
Problem Description


There are a few automatic memoization libraries available on the internet for various languages; but without knowing what they are for, where to use them, and how they work, it can be difficult to see their value. What are some convincing arguments for using memoization, and in what problem domains does memoization especially shine? Information for the uninformed would be especially appreciated here.

Solution

The popular factorial answer here is something of a toy answer. Yes, memoization is useful for repeated invocations of that function, but the relationship is trivial — in the "print factorial(N) for 0..M" case you're simply reusing the last value.
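For concreteness, the toy case being criticized looks something like this (a minimal sketch of the pattern, not taken from any particular answer):

```python
# Memoized factorial: across repeated calls for 0..M, each call
# merely reuses the previously computed value. There is no
# interesting sub-problem structure being merged here.
cache = {0: 1}

def factorial(n):
    if n not in cache:
        cache[n] = n * factorial(n - 1)
    return cache[n]

for i in range(6):
    print(factorial(i))  # 1, 1, 2, 6, 24, 120
```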

Many of the other examples here are just 'caching', which is useful, but it ignores the awesome algorithmic implications that the word memoization carries for me.

Far more interesting are cases where different branches of a single invocation of a recursive function hit identical sub-problems, but in a non-trivial pattern such that indexing into some cache is actually useful.

For example, consider n-dimensional arrays of integers whose absolute values sum to k. E.g. for n=3, k=5: [1,-4,0], [3,-1,1], [5,0,0], and [0,5,0] would be some examples. Let V(n,k) be the number of possible unique arrays for a given n,k. Its definition is:

V(n,0)=1; V(0,k)=0; V(n,k) = V(n-1,k) + V(n,k-1) + V(n-1,k-1);

This function gives 102 for n=3,k=5.
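As a quick check, the recurrence can be transcribed directly into Python with no memoization (a naive sketch; the function name mirrors the notation above):

```python
# Naive, non-memoized transcription of the recurrence:
#   V(n,0) = 1;  V(0,k) = 0 for k > 0;
#   V(n,k) = V(n-1,k) + V(n,k-1) + V(n-1,k-1)
def V(n, k):
    if k == 0:
        return 1
    if n == 0:
        return 0
    return V(n - 1, k) + V(n, k - 1) + V(n - 1, k - 1)

print(V(3, 5))  # 102
```

This is fine for small n and k, but the call tree explodes for larger inputs, as described next.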

Without memoization this quickly becomes very slow to compute for even fairly modest numbers. If you visualize the processing as a tree, each node an invocation of V() expanding to three children, you'd have 186,268,135,991,213,676,920,832 V(n,0)=1 leaves in the computation of V(32,32)... Implemented naively, this function rapidly becomes uncomputable on available hardware.

But many of the child branches in the tree are exact duplicates of each other, though not in a trivial way that could easily be eliminated, as with the factorial function. With memoization we can merge all those duplicate branches. In fact, with memoization V(32,32) only executes V() 1024 (n*k) times, which is a speedup of a factor of 10^21 or so (which gets larger as n,k grow, obviously) in exchange for a fairly small amount of memory. :) I find this kind of fundamental change to the complexity of an algorithm far more exciting than simple caching. It can make intractable problems easy.

Because Python numbers are naturally bignums, you can implement this formula in Python with memoization using a dictionary and tuple keys in only 9 lines. Give it a shot, and try it without the memoization.
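One way that dictionary-and-tuple-key sketch might look (the original leaves the implementation as an exercise, so this is my own rendering of it):

```python
# Memoized version: a dict keyed on (n, k) tuples caches every
# sub-result, so each distinct (n, k) pair is evaluated only once.
cache = {}

def V(n, k):
    if k == 0:
        return 1
    if n == 0:
        return 0
    if (n, k) not in cache:
        cache[(n, k)] = V(n - 1, k) + V(n, k - 1) + V(n - 1, k - 1)
    return cache[(n, k)]

print(V(3, 5))     # 102
print(V(32, 32))   # finishes almost instantly instead of never
print(len(cache))  # 1024 distinct sub-problems, matching the n*k count
```

The cache ends up holding exactly one entry per (n, k) pair with 1 <= n, k <= 32, which is where the 1024 figure above comes from.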

