Problem description:

I have a problem with a calculation in `matlab`.

I know that `pi` is a floating-point number and is not exact, so in MATLAB `sin(pi)` is not exactly zero.
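The same behavior appears in any IEEE-754 double-precision environment, not just MATLAB; for example, in Python (used here only because it is easy to check, the numbers match MATLAB's):

```python
import math

# math.pi is the double nearest the true pi; it is off by roughly 1.2e-16,
# so sin of it returns that tiny residual instead of 0.
print(math.sin(math.pi))      # a nonzero number on the order of 1e-16
print(math.sin(math.pi / 2))  # exactly 1.0
```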

My question is: if `pi` is not exact, then why is `sin(pi/2)` exactly equal to 1?

`sin(pi)` is not exact because of `pi`, but `sin(pi/2)` is exactly equal to 1.

I am wondering and confused!

I don't know the exact way that MATLAB calculates `sin(x)`, but you can investigate this by calculating it using the power series, i.e.

```
sin x = x - (x^3)/3! + (x^5)/5! - (x^7)/7! + (x^9)/9! ...
```

Turning this into MATLAB code, we can write:

```
clc
x = pi;          % or x = pi/2
res = x;         % first term of the series
factor = -1;     % alternating sign of each term
for ii = 3:2:19
    res = res + factor*power(x,ii)/factorial(ii);
    factor = -factor;
    fprintf('iteration %2i sin(x)=%1.16f\n', (ii-1)/2, res);
end
res
```

Running this code for both `x = pi` and `x = pi/2`, you can see that the `x = pi/2` case converges on the correct result (to within `eps` error) quite quickly, in 9 iterations, while the `x = pi` case doesn't converge in the same time frame.
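For readers without MATLAB at hand, here is a sketch of the same truncated series in Python (the loop bounds mirror the MATLAB code above; `sin_series` is just an illustrative name):

```python
import math

def sin_series(x, last_power=19):
    """Truncated Taylor series x - x^3/3! + x^5/5! - ..., up to x**last_power."""
    res = x
    sign = -1.0
    for ii in range(3, last_power + 1, 2):
        res += sign * x**ii / math.factorial(ii)
        sign = -sign
    return res

print(sin_series(math.pi / 2))  # essentially 1.0, within double precision
print(sin_series(math.pi))      # still about 5e-10 away from zero
```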

It's useful to note that at 9 iterations the last factorial being calculated is `factorial(19)`. The next factorial in this sequence would be `factorial(21)`, which is the last factorial that can be represented with 100% accuracy in double precision (see `help factorial`).
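The limit on exactly representable factorials comes from the 53-bit significand of a double: an integer is stored exactly only if its odd part fits in 53 bits. A quick bracketing check (Python here, where `math.factorial` uses exact integer arithmetic, so comparing against the `float` conversion exposes any rounding; 19 and 23 are chosen to sit on either side of the limit):

```python
import math

# 19! still fits in a double exactly; 23! no longer does.
print(float(math.factorial(19)) == math.factorial(19))  # True
print(float(math.factorial(23)) == math.factorial(23))  # False
```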

So I think what's happening is that for `pi/2` the mathematical solution converges on 1 to within double precision faster than in the `pi` case. In fact the `pi` case can't converge completely, due to limitations in the maths and in the accuracy that can be stored in a double-precision result.

Having said all that, `sin(pi)` is within `eps` of zero, so you should use that fact for your purposes.

I've copied the results I get below (R2015b):

```
Results for PI/2
iteration 1 sin(x)=0.9248322292886504
iteration 2 sin(x)=1.0045248555348174
iteration 3 sin(x)=0.9998431013994987
iteration 4 sin(x)=1.0000035425842861
iteration 5 sin(x)=0.9999999437410510
iteration 6 sin(x)=1.0000000006627803
iteration 7 sin(x)=0.9999999999939768
iteration 8 sin(x)=1.0000000000000437
iteration 9 sin(x)=1.0000000000000000
Final Result: 1.0000000000000000
Results for PI
iteration 1 sin(x)=-2.0261201264601763
iteration 2 sin(x)=0.5240439134171688
iteration 3 sin(x)=-0.0752206159036231
iteration 4 sin(x)=0.0069252707075051
iteration 5 sin(x)=-0.0004451602382092
iteration 6 sin(x)=0.0000211425675584
iteration 7 sin(x)=-0.0000007727858894
iteration 8 sin(x)=0.0000000224195107
iteration 9 sin(x)=-0.0000000005289183
Final Result: -0.0000000005289183
```

The reason is that `sin(pi) = 0.0`, so every error, no matter how small, is huge compared to `0` and thus visible.

Differently, for `sin(pi/2) = 1`: if the algorithm produces an error smaller than `eps` (around `2.220446e-16`), you would not see this error, because any result closer to `1` than `eps/2` rounds back to exactly `1`.
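This absolute-versus-relative effect is easy to demonstrate (Python, but the arithmetic is the same IEEE-754 double math as in MATLAB):

```python
import sys

eps = sys.float_info.epsilon      # ~2.220446e-16, same value as MATLAB's eps
print(1.0 + eps / 4 == 1.0)       # True: a sub-eps error next to 1 vanishes
print(0.0 + 1e-300 == 0.0)        # False: near 0 even a tiny error is visible
```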

The error is partly the result of the imprecise input (the value of `pi` is not exact) and partly the result of round-off during the calculation. One has to look deep into the code to get it right.

Another important factor is the function itself. Considering the error propagation by looking at the Taylor series around `pi` and `pi/2`, we can see:

```
sin(pi + dx)   = sin(pi)   + cos(pi)*dx   + O(dx^2) = -dx + O(dx^2)
sin(pi/2 + dx) = sin(pi/2) + cos(pi/2)*dx + O(dx^2) =  1  + O(dx^2)
```

It is clear: if `dx` is about `eps`, then near `pi/2` the error due to the imprecise input will be about `eps*eps`, and thus not visible compared to `1`.
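The first-order versus second-order sensitivity can be checked directly by perturbing the argument (Python sketch; `dx = 1e-8` is an arbitrary illustrative perturbation):

```python
import math

dx = 1e-8
# Near pi the input error passes through at first order: sin(pi + dx) ~ -dx.
print(math.sin(math.pi + dx))       # about -1e-8
# Near pi/2 it enters only at second order: sin(pi/2 + dx) ~ 1 - dx^2/2,
# and dx^2/2 = 5e-17 is below eps/2, so the result rounds back to 1.0.
print(math.sin(math.pi / 2 + dx))
```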