How to access the video buffer with LockRect?

haolei · Newcomer · Joined Oct 1, 2003 · Messages: 1 · Location: Baltimore, Maryland
Currently, we are using a GeForce FX 5900 video card (from eVGA.com) and DirectX to create a stereoscopic 3D environment with head-mounted display devices.

In order to emulate a specific eye disease (for example, glaucoma) while using normal-sighted subjects for research, we need to modify the 2D scene after the 3D rendering has been projected onto the 2D near plane, according to the disease's specific view pattern. We cannot simply block out part of the scene; if that were sufficient, everything would be simple.

What we do is use our collaborators' 2D algorithm (http://www.svi.cps.utexas.edu/examples.htm) to filter this 2D projected image according to the specific eye-disease pattern before the scene is presented on the screen. This requires us to read the 2D projected image out of the video buffer, filter it, and then feed it back to the video buffer for display.

My questions are:
1. How do we read the video buffer with DirectX in VC++?

2. How do we write back to the video buffer with DirectX in VC++?

3. How do we use multiple video buffers and control the video card to show the filtered image rather than the image straight from the projection?


Someone gave the answer as follows:
------------------------------------------------------------------------------
The IDirect3DDevice9 interface has two methods: GetFrontBufferData() and GetBackBuffer(). Both methods allow you to read frame-buffer data that was previously rendered to your device; they essentially return a pointer to the pixel data.
You should NOT use these methods for what you are trying to achieve because both methods will have a major impact on your performance. Your simulation will not run interactively if you take this approach.

The better way:
1. Create a separate IDirect3DSurface9,
2. attach it to the IDirect3DDevice9 as the render target,
3. render the scene,
4. detach the surface and call LockRect() on it, which returns a pointer to the image data.
While all this is going on, the graphics card can be used for other things (for example, rendering and displaying the next frame).
---- end of answer -------------------------------------
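Putting the answer's steps together, a sketch of the read-back path might look like the following. This is untested illustration, not the original poster's code: `g_pDevice`, `WIDTH`, `HEIGHT`, and `FilterImage` are placeholders for your own device pointer, dimensions, and 2D filter, and all error checking is omitted.

```cpp
#include <d3d9.h>

extern IDirect3DDevice9* g_pDevice;                 // assumed to exist already
void FilterImage(BYTE* pixels, int pitch, int w, int h);  // your 2D filter

void RenderAndFilter()
{
    const UINT WIDTH = 1024, HEIGHT = 768;          // placeholder dimensions
    IDirect3DSurface9* pRenderTarget = NULL;        // lives in video memory
    IDirect3DSurface9* pSysMemCopy   = NULL;        // lockable system-memory copy

    // 1. Create a separate render-target surface...
    g_pDevice->CreateRenderTarget(WIDTH, HEIGHT, D3DFMT_X8R8G8B8,
                                  D3DMULTISAMPLE_NONE, 0, FALSE,
                                  &pRenderTarget, NULL);
    // ...and a matching offscreen surface in system memory.
    g_pDevice->CreateOffscreenPlainSurface(WIDTH, HEIGHT, D3DFMT_X8R8G8B8,
                                           D3DPOOL_SYSTEMMEM,
                                           &pSysMemCopy, NULL);

    // 2. Attach the surface, and 3. render the scene into it.
    g_pDevice->SetRenderTarget(0, pRenderTarget);
    // ... BeginScene() / draw calls / EndScene() ...

    // 4. Copy the rendered image into system memory, then lock it.
    g_pDevice->GetRenderTargetData(pRenderTarget, pSysMemCopy);
    D3DLOCKED_RECT lr;
    pSysMemCopy->LockRect(&lr, NULL, 0);
    FilterImage((BYTE*)lr.pBits, lr.Pitch, WIDTH, HEIGHT);
    pSysMemCopy->UnlockRect();

    // 5. To display the filtered pixels, copy them back to a video-memory
    // surface (e.g. UpdateSurface) and StretchRect() it to the back buffer,
    // while the card is free to work on the next frame.

    pSysMemCopy->Release();
    pRenderTarget->Release();
}
```

The key point of the answer is step 4: GetRenderTargetData moves the pixels into a D3DPOOL_SYSTEMMEM surface, so the LockRect does not stall the GPU the way locking a video-memory surface would.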


So far, I have not been able to figure out how to do this.

Could anyone give me sample code demonstrating these steps?


Thanks,


Haolei
 
Can you get the surface detached? That basically removes the surface from video memory and puts it in system memory so that the LockRect (reading the pixels) will be faster.

If you search the following folder for LockRect you'll see it used in a few places:
C:\DXSDK\Samples\C++\

I've used it in C# to much success, though I've locked a surface attached to a texture. I've never tried detaching the surface from the device.

Here's a code snippet from ...\Common\Src\d3dfont.cpp:
C++:
    D3DLOCKED_RECT d3dlr;
    // Lock the top mip level; d3dlr.pBits points at the pixel data and
    // d3dlr.Pitch is the byte width of one row (which may include padding).
    m_pTexture->LockRect( 0, &d3dlr, NULL, 0 );
    BYTE* pDstRow;
    pDstRow = (BYTE*)d3dlr.pBits;
    WORD* pDst16;
    BYTE bAlpha; // 4-bit measure of pixel intensity
    DWORD x, y;

    for( y=0; y < m_dwTexHeight; y++ )
    {
        pDst16 = (WORD*)pDstRow;
        for( x=0; x < m_dwTexWidth; x++ )
        {
            bAlpha = (BYTE)((pBitmapBits[m_dwTexWidth*y + x] & 0xff) >> 4);
            if (bAlpha > 0)
            {
                *pDst16++ = (WORD) ((bAlpha << 12) | 0x0fff);
            }
            else
            {
                *pDst16++ = 0x0000;
            }
        }
        pDstRow += d3dlr.Pitch;
    }
    m_pTexture->UnlockRect( 0 );

-Nerseus
 