Problem description
I'm trying to make a very simple application which would display a different image to each eye. I have an Asus VG236H monitor and the NVIDIA 3D Vision kit with stereo 3D shutter glasses. I'm using C#, .NET Framework 2.0, DirectX 9 (Managed DirectX) and Visual Studio 2008. I have been searching high and low for examples and tutorials, have actually found a couple, and based on those I have created the program, but for some reason I can't get it working.
When looking for examples of how to display a different image to each eye, many people keep referring to the NVIDIA presentation from GDC 09 (the famous GDC09-3DVision-The_In_and_Out.pdf document), pages 37-40. My code is mainly constructed based on that example:
- I'm loading two textures (Red.png and Blue.png) onto surfaces (_imageLeft and _imageRight) in the function LoadSurfaces().
- The Set3D() function puts those two images side by side into one bigger image (_imageBuf), sized 2 × screen width by screen height + 1.
- Set3D() finishes by appending the stereo signature to the last row.
- The OnPaint() function takes the back buffer (_backBuf) and copies the content of the combined image (_imageBuf) onto it.
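To make the layout concrete, here is a small numeric sketch of the steps above, assuming a 1920×1080 screen and a 32-bit (4 bytes per pixel) back-buffer format:

```python
# Numeric sketch of the side-by-side staging layout described above,
# assuming a 1920x1080 screen and a 32-bit (A8R8G8B8) format.
SCREEN_W, SCREEN_H = 1920, 1080
BYTES_PER_PIXEL = 4

buf_w = SCREEN_W * 2   # left and right eye images side by side
buf_h = SCREEN_H + 1   # one extra row for the stereo signature

# Byte offset of the signature row inside the locked surface,
# assuming the row pitch is exactly buf_w * BYTES_PER_PIXEL.
signature_offset = buf_w * SCREEN_H * BYTES_PER_PIXEL

print(buf_w, buf_h, signature_offset)  # 3840 1081 16588800
```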
When I run the program, the shutter glasses start working, but I only see the two images side by side on the screen. Could someone help out and tell me what I am doing wrong? I believe that solving this problem might also help others, as there does not yet seem to be a simple example of how to do this with C#.
Below are the relevant parts of my code. The complete project can be downloaded here: http://koti.mbnet.fi/jjantti2/NVStereoTest.rar
public void InitializeDevice()
{
    PresentParameters presentParams = new PresentParameters();
    presentParams.Windowed = false;
    presentParams.BackBufferFormat = Format.A8R8G8B8;
    presentParams.BackBufferWidth = _size.Width;
    presentParams.BackBufferHeight = _size.Height;
    presentParams.BackBufferCount = 1;
    presentParams.SwapEffect = SwapEffect.Discard;
    presentParams.PresentationInterval = PresentInterval.One;
    _device = new Device(0, DeviceType.Hardware, this, CreateFlags.SoftwareVertexProcessing, presentParams);
}
public void LoadSurfaces()
{
    _imageBuf = _device.CreateOffscreenPlainSurface(_size.Width * 2, _size.Height + 1, Format.A8R8G8B8, Pool.Default);
    _imageLeft = Surface.FromBitmap(_device, (Bitmap)Bitmap.FromFile("Blue.png"), Pool.Default);
    _imageRight = Surface.FromBitmap(_device, (Bitmap)Bitmap.FromFile("Red.png"), Pool.Default);
}
private void Set3D()
{
    Rectangle destRect = new Rectangle(0, 0, _size.Width, _size.Height);
    _device.StretchRectangle(_imageLeft, _size, _imageBuf, destRect, TextureFilter.None);
    destRect.X = _size.Width;
    _device.StretchRectangle(_imageRight, _size, _imageBuf, destRect, TextureFilter.None);
    GraphicsStream gStream = _imageBuf.LockRectangle(LockFlags.None);
    byte[] data = new byte[] {0x44, 0x33, 0x56, 0x4e,  // NVSTEREO_IMAGE_SIGNATURE = 0x4433564e
                              0x00, 0x00, 0x0F, 0x00,  // Screen width * 2 = 1920*2 = 3840 = 0x00000F00
                              0x00, 0x00, 0x04, 0x38,  // Screen height = 1080 = 0x00000438
                              0x00, 0x00, 0x00, 0x20,  // dwBPP = 32 = 0x00000020
                              0x00, 0x00, 0x00, 0x02}; // dwFlags = SIH_SCALE_TO_FIT = 0x00000002
    gStream.Seek(_size.Width * 2 * _size.Height * 4, System.IO.SeekOrigin.Begin); // last row
    gStream.Write(data, 0, data.Length);
    gStream.Close();
    _imageBuf.UnlockRectangle();
}
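One caveat on the Seek() call above: it assumes the locked surface's row pitch is exactly width × 4 bytes, but Direct3D is free to pad rows, so the pitch reported when locking the surface is the safer basis for the offset. A minimal sketch of the pitch-aware calculation (the helper name is illustrative, not from the original project):

```python
def signature_row_offset(width_px, height_px, pitch_bytes):
    """Byte offset of the extra signature row in a locked surface.

    The C# code above seeks to width * height * 4, which is only correct
    when pitch_bytes == width_px * 4; Direct3D may pad each row, so the
    pitch reported by the lock call should be used instead.
    """
    return pitch_bytes * height_px

# With an unpadded surface the two formulas agree:
assert signature_row_offset(3840, 1080, 3840 * 4) == 3840 * 1080 * 4
```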
protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
{
    _device.BeginScene();
    // Get the back buffer, then stretch the combined surface onto it.
    _backBuf = _device.GetBackBuffer(0, 0, BackBufferType.Mono);
    _device.StretchRectangle(_imageBuf, new Rectangle(0, 0, _size.Width * 2, _size.Height + 1), _backBuf, new Rectangle(0, 0, _size.Width, _size.Height), TextureFilter.None);
    _backBuf.ReleaseGraphics();
    _device.EndScene();
    _device.Present();
    this.Invalidate();
}
A friend of mine found the problem: the bytes in the stereo signature were in the reverse order. Here is the correct order:
byte[] data = new byte[] {0x4e, 0x56, 0x33, 0x44,  // NVSTEREO_IMAGE_SIGNATURE = 0x4433564e
                          0x00, 0x0F, 0x00, 0x00,  // Screen width * 2 = 1920*2 = 3840 = 0x00000F00
                          0x38, 0x04, 0x00, 0x00,  // Screen height = 1080 = 0x00000438
                          0x20, 0x00, 0x00, 0x00,  // dwBPP = 32 = 0x00000020
                          0x02, 0x00, 0x00, 0x00}; // dwFlags = SIH_SCALE_TO_FIT = 0x00000002
The code works perfectly after this change. This code might even serve as a good tutorial for someone else attempting the same thing. :)
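For anyone double-checking the fix: the corrected array is simply the little-endian encoding of the five 32-bit fields of the stereo image header. A quick sketch with Python's struct module reproduces the exact bytes above:

```python
import struct

NVSTEREO_IMAGE_SIGNATURE = 0x4433564E
SIH_SCALE_TO_FIT = 0x00000002

# Pack the five DWORD header fields little-endian, the byte order
# an x86 machine actually stores them in.
header = struct.pack("<5I",
                     NVSTEREO_IMAGE_SIGNATURE,  # dwSignature
                     1920 * 2,                  # dwWidth: screen width * 2 = 3840
                     1080,                      # dwHeight
                     32,                        # dwBPP
                     SIH_SCALE_TO_FIT)          # dwFlags

print(header.hex())  # 4e563344000f0000380400002000000002000000
```

The first four bytes come out as 0x4e, 0x56, 0x33, 0x44, matching the corrected array rather than the original big-endian one.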
That concludes this article on how to control stereo frames individually with C# (NVIDIA 3D shutter glasses); I hope the answer above helps others attempting the same.