This article looks at whether you can ensure that each HashMap bucket/slot holds only one entry; the question and its accepted answer are reproduced below.

Problem Description

Is there a way to strictly ensure the number of entries per HashMap bucket without tampering with the Object.hashCode() function in Java?

The load factor is an average: (# of entries) / (# of buckets). In essence, say I have a HashMap of capacity 1000, and for the sake of this example a load factor of 1. The 100 objects I am going to store in the HashMap have a bad hashCode() function which always returns the same value for every object. When I am done storing the 100 objects, they will all map to the same HashMap bucket, and I end up with LinkedList performance. The load factor will sit silent because 100 entries / 1000 buckets = 0.1 < 1, so no resize is triggered. Even if I put in 1M such objects and the resize threshold is eventually crossed, resizing is useless anyway: every key still hashes to the same bucket. I know this is an uncommon scenario in the real world, but I would like to improve my understanding. Is there a way in HashMap to prevent this, or at least to get some warning from the structure itself?

Solution

A HashMap will always calculate which bucket to use based on the key's hash code. If every key has the same hash code, they will all map to the same bucket. You cannot prevent the behavior you described without providing a better hashCode() implementation.

You could look at Map implementations that use open addressing (e.g. Trove's THashMap). They will always have just one entry per bucket. But the performance will not improve: they just deal with collisions in a different way, and they also will not solve your root problem, a bad hash code.
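To make the failure mode concrete, here is a minimal sketch of the scenario from the question. The class name BadKey, the constant 42, and the sizes are illustrative assumptions, not from the original post; the capacity and load factor match the question's example.

```java
import java.util.HashMap;
import java.util.Map;

public class BadHashDemo {

    // Illustrative key whose hashCode() always returns the same value,
    // mimicking the "bad hashCode function" from the question.
    static final class BadKey {
        private final int id;

        BadKey(int id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }

        @Override
        public int hashCode() {
            return 42; // same value for every instance -> every key maps to one bucket
        }
    }

    public static void main(String[] args) {
        // Capacity 1000 and load factor 1, as in the question.
        Map<BadKey, Integer> map = new HashMap<>(1000, 1.0f);

        for (int i = 0; i < 100; i++) {
            map.put(new BadKey(i), i);
        }

        // 100 entries / 1000 buckets = 0.1 < 1, so no resize happens, yet
        // every entry sits in the same bucket. On older JDKs that bucket is
        // a linked list, so get() degrades to a linear scan with equals()
        // checks; Java 8+ may convert a large bin to a tree, which softens
        // but does not fix the bad hash code.
        System.out.println(map.get(new BadKey(99))); // prints 99
    }
}
```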
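As for getting a warning from the structure itself: HashMap exposes no collision statistics, but if a warning is all you need, something along the following lines is possible. WarningHashMap and its 4x heuristic are purely hypothetical, not a JDK or Trove facility, and this sketch only intercepts put(), so entries added via putAll(), compute(), or merge() go uncounted.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

// Hypothetical wrapper: tracks how many distinct hash codes its keys
// produce and prints a warning when that number is suspiciously low
// relative to the map's size.
public class WarningHashMap<K, V> extends HashMap<K, V> {

    private final Set<Integer> distinctHashes = new HashSet<>();

    @Override
    public V put(K key, V value) {
        if (key != null) {
            distinctHashes.add(key.hashCode());
            // Illustrative heuristic: many entries but few distinct hash
            // codes suggests a degenerate hashCode() implementation.
            if (size() >= 16 && distinctHashes.size() * 4 < size()) {
                System.err.println("Warning: " + size() + " entries share only "
                        + distinctHashes.size() + " distinct hash codes");
            }
        }
        return super.put(key, value);
    }
}
```

This is a diagnostic aid, not a fix: it cannot redistribute the entries, it only surfaces the problem that a better hashCode() would have to solve.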