This article looks at why floating point math in Python/NumPy is not reproducible across machines and how to deal with it; it may be a useful reference for anyone running into the same problem.

Problem Description


    Comparing the results of a floating point computation across a couple of different machines, I find that they consistently produce different results. Here is a stripped-down example that reproduces the behavior:

    import numpy as np
    from numpy.random import randn as rand
    
    M = 1024
    N = 2048
    np.random.seed(0)
    
    # two random float32 matrices
    a = rand(M, N).astype(dtype=np.float32)
    w = rand(N, M).astype(dtype=np.float32)
    
    # repeatedly feed the product back into itself
    b = np.dot(a, w)
    for i in range(10):
        b = b + np.dot(b, a)[:, :1024]
        np.divide(b, 100., out=b)
    
    print(b[0, :3])
    

    Different machines produce different results like

    • [ -2.85753540e-05 -5.94204867e-05 -2.62337649e-04]
    • [ -2.85751412e-05 -5.94208468e-05 -2.62336689e-04]
    • [ -2.85754559e-05 -5.94202756e-05 -2.62337562e-04]

    but I can also get identical results, e.g. by running on two MacBooks of the same vintage. This happens with machines that have the same version of Python and numpy but are not necessarily linked against the same BLAS libraries (e.g. the Accelerate framework on Mac, OpenBLAS on Ubuntu). However, shouldn't different numerical libraries all conform to the same IEEE floating point standard and give exactly the same results?

    Solution

    Floating point calculations are not always reproducible.

    You may get reproducible results for floating point calculations across different machines if you use the same executable image and inputs, with libraries built by the same compiler and with identical compiler settings (switches).
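
    As a quick check (not part of the original answer), numpy.show_config() reports which BLAS/LAPACK backend a given NumPy build is linked against; two machines that report different backends should not be expected to agree bit for bit.

    import numpy as np
    
    # Print NumPy's build information, including the BLAS/LAPACK libraries
    # it is linked against (e.g. OpenBLAS, MKL, or Apple's Accelerate).
    np.show_config()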

    However, if you use a dynamically linked library you may get different results, for numerous reasons. First of all, as Veedrac pointed out in the comments, it might use different algorithms for its routines on different architectures. Second, a compiler might produce different code depending on its switches (various optimizations, control settings). Even a+b+c yields non-deterministic results across machines and compilers, because we cannot be sure about the order of evaluation or the precision of intermediate calculations.
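
    A minimal sketch of the order-of-evaluation point (illustrative, not from the original answer): floating point addition is not associative, so two groupings of the same three values can give different answers, and the precision used for intermediate results shifts the answer again.

    import numpy as np
    
    a, b, c = 1e20, -1e20, 1.0
    print((a + b) + c)   # (a + b) is 0.0, so this prints 1.0
    print(a + (b + c))   # c is absorbed by b, so this prints 0.0
    
    # Intermediate precision matters too: summing the same float32 data
    # with a float32 accumulator vs a float64 accumulator can give
    # slightly different totals.
    x = np.full(1_000_000, 0.1, dtype=np.float32)
    print(x.sum(dtype=np.float32), x.sum(dtype=np.float64))

    A BLAS routine that blocks, vectorizes, or threads a matrix product differently is effectively choosing a different evaluation order for each output element, which is one reason the matrices in the question can drift apart across machines.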

    Read here why it is not guaranteed to get identical results on different IEEE 754-1985 implementations. The newer standard (IEEE 754-2008) tries to go further, but it still doesn't guarantee identical results among different implementations, because, for example, it allows implementers to choose when tininess (the underflow exception) is detected.

    More information about floating point determinism can be found in this article.
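
    A practical note, not from the original answer: when comparing runs from different machines it is usually more robust to test for agreement within a tolerance, e.g. with numpy.allclose, than to expect bit-identical output.

    import numpy as np
    
    # Hypothetical arrays standing in for the first three entries of b
    # computed on two different machines (values taken from the question).
    b_mac = np.array([-2.85753540e-05, -5.94204867e-05, -2.62337649e-04])
    b_linux = np.array([-2.85754559e-05, -5.94202756e-05, -2.62337562e-04])
    print(np.allclose(b_mac, b_linux, rtol=1e-4, atol=1e-8))  # True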

That concludes this article on floating point math in Python/NumPy not being reproducible across machines. We hope the answer above is helpful.
