How do I get .NET garbage collection to be more aggressive?

Problem description

I have an application that is used in image processing, and I find myself typically allocating arrays of 4000x4000 ushorts, as well as the occasional float array and the like. Currently, the .NET framework tends to crash in this app apparently at random, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected.
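For reference, the 32 MB figure follows directly from the array dimensions given above; this is just arithmetic on the sizes involved:

```csharp
using System;

// A 4000 x 4000 ushort image: each ushort occupies 2 bytes.
long bytes = 4000L * 4000L * sizeof(ushort);
Console.WriteLine(bytes); // 32000000 bytes, about 30.5 MiB
```

Well above the ~85 KB large-object threshold discussed later in this question, so every one of these arrays is a "large" allocation as far as the runtime is concerned.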

Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code and made sure that any memory I declare I also delete, but I still get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave memory behind because it once interacted with a native call-- is that possible? If so, can I turn that functionality off?

Here is some very specific code that will cause the crash. According to this SO question, I do not need to dispose of the BitmapSource objects here. Here is the naive version, with no GC.Collects in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below, as well as the requirements listed in this SO question.

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        int theRows = 4000, currRows;
        int theColumns = 4000, currCols;
        int theMaxChange = 30;
        int i;
        List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
        byte[] displayBuffer = null;//the buffer used as a bitmap source
        BitmapSource theSource = null;
        for (i = 0; i < theMaxChange; i++) {
            currRows = theRows - i;
            currCols = theColumns - i;
            theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create(currCols, currRows,
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
        //should get here.  If not, then theMaxChange is too large.
        //Now, go back up the undo stack.
        for (i = theMaxChange - 1; i >= 0; i--) {
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to undo change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}
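The stride argument in the `BitmapSource.Create` calls above is worth unpacking: `(width * bitsPerPixel + 7) / 8` is an integer ceiling that rounds the bits per row up to whole bytes. A minimal sketch of the same arithmetic (the helper name is mine, not from the original code):

```csharp
using System;

// Bytes per row, rounding any partial byte up to a whole byte.
static int StrideFor(int widthPixels, int bitsPerPixel)
    => (widthPixels * bitsPerPixel + 7) / 8;

// Gray8 is 8 bits per pixel, so the stride equals the pixel width:
Console.WriteLine(StrideFor(4000, 8)); // 4000
// A 1-bit-per-pixel format still needs whole bytes: 10 pixels -> 2 bytes.
Console.WriteLine(StrideFor(10, 1));   // 2
```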

Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. For me, this tends to happen around x = 50 or so:

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        for (int x = 0; x < 1000; x++){
            int theRows = 4000, currRows;
            int theColumns = 4000, currCols;
            int theMaxChange = 30;
            int i;
            List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
            byte[] displayBuffer = null;//the buffer used as a bitmap source
            BitmapSource theSource = null;
            for (i = 0; i < theMaxChange; i++) {
                currRows = theRows - i;
                currCols = theColumns - i;
                theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create(currCols, currRows,
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            }
            //should get here.  If not, then theMaxChange is too large.
            //Now, go back up the undo stack.
            for (i = theMaxChange - 1; i >= 0; i--) {
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes
                GC.Collect();
            }
            System.Console.WriteLine("Got to changelist " + x.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}
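As an aside, the loop above calls `GC.WaitForPendingFinalizers()` before `GC.Collect()`; the usual sequence for a forced full collection is the other way around, with a second collect to reclaim objects whose finalizers have just run. A minimal sketch:

```csharp
using System;

// Canonical forced full collection: collect, drain the finalizer queue,
// then collect again to free objects that were only alive pending finalization.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine("forced full collection complete");
```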

If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. It's a pretty simple routine.

Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that frequently uses so-called 'large' allocations would become unstable within a matter of days to weeks when using .NET.

EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually, allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
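For later readers: the behavior described above can be observed directly, and newer runtimes added an opt-in mitigation that did not exist when this question was written. The sketch below assumes .NET 4.5.1 or later for the `GCSettings.LargeObjectHeapCompactionMode` part:

```csharp
using System;
using System.Runtime;

class LohDemo {
    static void Main() {
        // Objects over ~85,000 bytes go straight to the Large Object Heap,
        // which the GC reports as generation 2.
        var big = new byte[200000];
        Console.WriteLine(GC.GetGeneration(big));   // 2

        // Small objects start out in generation 0.
        var small = new byte[1000];
        Console.WriteLine(GC.GetGeneration(small));

        // .NET 4.5.1+ only: request a one-time LOH compaction on the next
        // blocking full collection (the setting resets itself afterwards).
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}
```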

Recommended answer

Here are some articles detailing problems with the Large Object Heap. It sounds like what you might be running into.

http://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception

http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/

Here is a link on how to collect data on the Large Object Heap (LOH):
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

According to this, it seems there is no way to compact the LOH. I can't find anything newer that explicitly says how to do it, and so it seems that it hasn't changed in the 2.0 runtime:
http://blogs.msdn.com/maoni/archive/2006/04/18/large-object-heap.aspx

The simple way of handling the issue is to make small objects if at all possible. Your other option is to create only a few large objects and reuse them over and over. Not an ideal situation, but it might be better than re-writing the object structure. Since you did say that the created objects (arrays) are of different sizes, it might be difficult, but it could keep the application from crashing.
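A sketch of the reuse idea for the asker's scenario: since every crop is no larger than the original 4000x4000 image, one worst-case buffer can back all of them. The names here are mine, not from the original code:

```csharp
using System;

// Allocate the worst-case buffers once, outside any loop.
const int MaxRows = 4000, MaxCols = 4000;
ushort[] pixelBuffer = new ushort[MaxRows * MaxCols];
byte[] displayBuffer = new byte[MaxRows * MaxCols];

// A crop of currRows x currCols uses only the first currRows * currCols
// elements; no new large arrays are allocated, so the LOH cannot fragment.
int currRows = 3990, currCols = 3990;
int used = currRows * currCols;
Console.WriteLine(used <= pixelBuffer.Length); // True
```

Note that the undo stack in the question keeps every cropped image alive, so full reuse would also require rethinking that storage, for example by keeping crop offsets instead of whole copies.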
