
Question

I have a program which has a significant number of statically defined variables. If I start it up in GDB, with a break point in main, and then run pmap, I see there is about 100MB of data already allocated:

08838000 107576K rw---    [ anon ]
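As a rough sketch (Linux-specific, not part of the original question), the "rw anon" total that pmap reports can also be computed directly from /proc/&lt;pid&gt;/maps, which is useful where pmap is not installed:

```shell
# Sketch (Linux-specific): sum the sizes of writable anonymous mappings,
# roughly what pmap lists as "rw---  [ anon ]". Uses the current shell's
# own maps file; substitute any PID for "self".
total=0
while read -r range perms offset dev inode path; do
    case "$perms" in
        rw*) [ -z "$path" ] && total=$(( total + 0x${range#*-} - 0x${range%-*} )) ;;
    esac
done < /proc/self/maps
echo "anonymous rw total: $(( total / 1024 ))K"
```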

I've already found a pile of functions that have enormous statically defined arrays (e.g. 200,000 ints) and got rid of them as I've found them.

Is there any way to find out what the largest items are on the heap / data segments? Either in GDB or through any other means?

Answer

The information can be found by using the object code inspection utilities like nm(1):

nm --size-sort <object-file.o>

Also, objdump can give additional insights for the completely linked program, given enough debug information.

The utilities are often target platform specific, so when cross-compiling care must be taken to use the correct program (i.e. something like x86_64-linux-gnu-gcc-nm instead of just nm).
