When I use this code in Applescript (or another language):

```
set var1 to "0.00"
set var2 to "50.86"
set var3 to "1335.56"
set var4 to "60.72"
set netto to "1447.14"
set sub_totaal to var1 + var2 + var3 + var4
set sub_dif to sub_totaal - netto
```

The answer is: `-2.27373675443232E-13`

Why?

Not all fractional decimal values have an exact binary (base-2) representation. Most programming languages simply use the host's floating-point arithmetic, in which the usual associative and distributive laws of arithmetic do not hold: each intermediate result is rounded, so regrouping the same operands can change the answer.
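The loss of associativity is easy to demonstrate in any language that uses IEEE-754 doubles; here is a minimal Python sketch:

```python
# Floating-point addition is not associative: the same three
# operands, grouped differently, round to different results.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False
```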

Notice that even the final decimal value, `1447.14`, cannot be represented exactly as an IEEE-754 floating-point value.
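You can see this directly: `1447.14` is `1447 + 7/50`, and since 50 is not a power of two the fraction has no finite binary expansion. A quick Python check (any language exposing the raw double would do) shows the stored value differs from the decimal literal:

```python
# Decimal(float) converts the double exactly, exposing the value
# actually stored for the literal 1447.14.
from decimal import Decimal

stored = Decimal(1447.14)     # exact value of the nearest double
exact = Decimal("1447.14")    # the true decimal value
print(stored == exact)        # False
```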

Regardless of what language you use, it's crucial to be aware of these limitations. The analysis of floating-point precision / range errors is almost a separate field in itself.

Well, to do exact decimal arithmetic you can call out to Unix tools; one such tool is `bc`. Just send your calculations to the shell:

```
set var1 to "0.00"
set var2 to "50.86"
set var3 to "1335.56"
set var4 to "60.72"
set netto to "1447.14"

set cmd to var1 & "+" & var2 & "+" & var3 & "+" & var4
set sub_total to do shell script "echo " & quoted form of cmd & " | bc"

set cmd to sub_total & "-" & netto
set sub_dif to do shell script "echo " & quoted form of cmd & " | bc"

return {sub_total, sub_dif} --{"1447.14", "0"}
```
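For comparison, the same exact-decimal arithmetic that `bc` performs can be sketched in Python's `decimal` module (the variable names mirror the AppleScript above; this is just an illustration, not part of the original answer):

```python
# Constructing Decimal from strings keeps the values exact,
# so the sum and difference come out with no rounding error.
from decimal import Decimal

vals = [Decimal("0.00"), Decimal("50.86"),
        Decimal("1335.56"), Decimal("60.72")]
netto = Decimal("1447.14")

sub_total = sum(vals)
sub_dif = sub_total - netto
print(sub_total, sub_dif)  # 1447.14 0.00
```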
