Problem description:

Okay, I was trying to evaluate some really large numbers in Python - on the order of 10^(10^120) - which I then realized was *quite* huge. Anyway, I then scaled back to 10**10**5 and 10**10**6. Checking the time difference between the two brought me to this somewhat strange finding, which looked like an inefficiency.

The finding was that `cProfile.run("x=10**10**6")` took *0.3s*, while `cProfile.run("print 10**10**6")` took *40s*.

Then I tried `x = 10**10**6`, which took almost no time, but thereafter every time I *evaluated* `x` (typed `x` followed by *Enter*) it took a really long time (*40s*, I suppose). So I am assuming that every time I evaluate `x`, it calculates the entire value over again.

So my question is: isn't that extremely inefficient? Say I had declared some variable in a module, `x = 10**10` - would the Python interpreter compute the value of `10**10` over and over again, every time I referenced `x`?

*Gory details would be much appreciated.*

The value is not being recalculated each time you print it; the long delay you see is the cost of converting the large number into a decimal string for display.

Python can compute with extremely large numbers internally in binary, but turning that binary representation back into decimal digits for display is a **lot** of work.
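A quick way to see where the time goes is to time the two steps separately with `timeit`. (The smaller `10**10**5` exponent below is my own choice to keep the run short, not a figure from the question; on CPython 3.11+ the default int-to-str digit limit also has to be raised before the conversion is allowed.)

```python
import sys
import timeit

# CPython 3.11+ caps int->str conversion length by default; lift the cap
# so the conversion below is permitted (older versions lack this setting).
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(200_000)

# Computing the number (kept in binary internally) is fast...
compute = timeit.timeit("x = 10**10**5", number=1)

# ...while converting it to a ~100,001-digit decimal string dominates.
convert = timeit.timeit("str(x)", setup="x = 10**10**5", number=1)

print(f"compute: {compute:.4f}s  convert: {convert:.4f}s")
```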

**For example** (the kind of thing quite a few Project Euler problems ask): what is the sum of all the digits of, say, `2 ** 32768`?

Python's arbitrary-precision integers can calculate that result straight away, but as soon as you do:

```
my_big_number = 2 ** 32768  # about 9,865 decimal digits
sum(int(c) for c in str(my_big_number))  # ouch - that's a lot of digits to produce and store
```

So that's what's happening when you type the variable name and press Enter (or print it): the interpreter has to perform that binary-to-decimal conversion each time.
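To confirm that the stored value itself is not being recalculated, compare the cost of merely *using* the variable with the cost of *displaying* it. This is a small sketch of my own (again with the smaller `10**10**5` exponent, so the timings are illustrative rather than the 40s from the question):

```python
import sys
import timeit

# Lift the CPython 3.11+ int->str digit cap so str(x) is permitted.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(200_000)

setup = "x = 10**10**5"  # the value is computed and stored once

# Referencing x (e.g. in arithmetic) is cheap - no recomputation happens.
reference = timeit.timeit("x + 1", setup=setup, number=10)

# Displaying x forces the binary->decimal conversion every single time.
display = timeit.timeit("str(x)", setup=setup, number=10)

print(f"reference: {reference:.4f}s  display: {display:.4f}s")
```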