
Problem Description



It's well known that the time complexity of array access by index is O(1).

The documentation of Java's ArrayList, which is backed by an array, says the same about its get operation.

The lookup is done by computing the memory address of the element at a given index, independently of the array's size (something like start_address + element_size * index). My understanding is that the array's elements have to be stored next to each other in memory for this lookup mechanism to be possible.
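That address computation can be sketched directly. The sketch below uses made-up addresses purely for illustration; real object addresses are managed internally by the JVM and are not exposed to Java code:

```java
// Illustrative sketch of constant-time element addressing under contiguous
// storage. All address values here are hypothetical, not real JVM addresses.
public class AddressArithmetic {
    // One multiply and one add, regardless of how long the array is: O(1).
    static long elementAddress(long startAddress, long elementSize, long index) {
        return startAddress + elementSize * index;
    }

    public static void main(String[] args) {
        long base = 0x1000L; // hypothetical start address of the array data
        long intSize = 4L;   // a Java int occupies 4 bytes
        System.out.println(elementAddress(base, intSize, 0));  // 4096
        System.out.println(elementAddress(base, intSize, 10)); // 4136
    }
}
```

The cost of the computation does not depend on the index or the array length, which is exactly what makes the access O(1).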

However, from this question, I understand that arrays in Java are not guaranteed to have their elements stored contiguously in memory. If that is the case, how could access always be O(1)?


Edit: I'm quite aware of how an ArrayList works. My point is, if contiguous storage for arrays is not guaranteed by the JVM specification, its elements could be in different areas of memory. Although that situation is highly unlikely, it would render the lookup mechanism mentioned above impossible, and the JVM would need another way to do the lookup, one that would no longer be O(1). At that point, it would contradict both the common knowledge stated at the top of this question and the documentation of ArrayList regarding its get operation.

Thanks everybody for your answers.


Edit: In the end, I think it's a JVM-specific thing, but most, if not all, JVMs stick to contiguous storage of an array's elements even when there's no guarantee, so that the lookup mechanism above can be used. It's simple, efficient and cost-effective.

As far as I can see, it would be silly to store the elements all over the place and then have to take a different approach to doing the lookup.

Solution

As far as I'm aware, the spec gives no guarantee that arrays will be stored contiguously. I'd speculate that most JVM implementations will though. In the basic case it's simple enough to enforce: if you can't expand the array because other memory is occupying the space you need, move the whole thing somewhere else.
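The "move the whole thing somewhere else" strategy is essentially what ArrayList does when it grows: allocate a fresh contiguous array and copy the old contents over with Arrays.copyOf. A minimal sketch (the class name, initial capacity, and doubling factor are my own choices, not ArrayList's exact internals):

```java
import java.util.Arrays;

// Minimal sketch of a growable array that keeps its elements contiguous by
// relocating the whole backing array whenever it runs out of room.
public class GrowableIntArray {
    private int[] data = new int[4]; // initial capacity chosen arbitrarily
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            // Relocate: new contiguous block at double the capacity,
            // old contents copied over.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;
    }

    public int get(int index) {
        return data[index]; // plain indexed access into one contiguous block: O(1)
    }

    public int size() {
        return size;
    }
}
```

Adding ten elements to this sketch forces two relocations (capacity 4 to 8, then 8 to 16), but every get in between remains a single contiguous-array access.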

Your confusion stems from a misunderstanding of the meaning of O(1). O(1) does not mean that a function is performed in a single operation (e.g. start_address + element_size * index). It means that the operation is performed in a constant amount of time irrespective of the size of the input data - in this case, the array. This is perfectly achievable with data that is not stored contiguously. You could have a lookup table mapping indexes to memory locations, for example.
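To make the last point concrete, here is a hedged sketch (all names invented, the "heap" simulated with a plain array) of how an index-to-location table would still give constant-time access even if elements were scattered:

```java
// Sketch: elements live at arbitrary offsets in a simulated heap, and a
// lookup table maps logical index -> offset. Access is still O(1): one
// table read plus one element read, independent of the element count.
public class ScatteredStore {
    private final int[] memory;    // simulated heap; elements need not be adjacent
    private final int[] locations; // logical index -> offset into 'memory'

    public ScatteredStore(int[] memory, int[] locations) {
        this.memory = memory;
        this.locations = locations;
    }

    public int get(int index) {
        // Two constant-time array reads, regardless of how many elements exist.
        return memory[locations[index]];
    }
}
```

The indirection adds a constant factor (one extra read) but does not change the asymptotic complexity, which is all O(1) claims about.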
