I am curious what this function does and why it's useful. I know it does a type conversion of a float to an integer; any explanation in detail would be appreciated.

``unsigned int func(float t){return *(unsigned int *)&t;}``

Thanks

Assuming a `float` and a `unsigned int` are the same size, it gives an `unsigned int` value that is represented using the same binary representation (underlying bits) as a supplied `float`.

The caller can then apply bitwise operations to the returned value, and access the individual bits (e.g. the sign bit, the bits that make up the exponent and mantissa) separately.

The mechanics are that `(unsigned int *)` converts `&t` into a pointer to `unsigned int`. The `*` then reads the value at that location through the wrong type. That last step formally has undefined behaviour, because it violates the strict-aliasing rule.

For an implementation (compiler) for which `float` and `unsigned int` have different sizes, the behaviour could be anything.

It returns the `unsigned int` whose binary representation is the same as the binary representation of the given `float`.

```c
uint_var = func(float_var);
```

is essentially equivalent to:

```c
memcpy(&uint_var, &float_var, sizeof(uint_var));
```

Type punning like this results in undefined behavior, so code like this is not portable. However, it's not uncommon in low-level programming, where the implementation-dependent behavior of the compiler is known.

This doesn't convert a float to an int in the usual sense. On most (practically all) platforms, a float is a 32-bit entity made up of the following four bytes:

1. Sign bit + top 7 bits of the exponent
2. Low bit of the exponent + top 7 bits of the mantissa
3. Next 8 bits of the mantissa
4. Last 8 bits of the mantissa

Whereas an unsigned is just 32 bits of number (in endianness dictated by platform).

A straight float->unsigned int conversion would try to shoehorn the actual value of the float into the closest unsigned it can fit inside. This code instead copies the bits that make up the float without trying to interpret what they mean. So `1.0f` comes out as `0x3f800000` (assuming IEEE-754 single precision; the `float` and the `unsigned int` share the platform's byte order, so the resulting value is the same on either endianness).

The above makes a fair number of grody assumptions about the platform (on some platforms you'll have a size mismatch and could end up with truncation or even memory corruption :-( ). I'm also not exactly sure why you'd want to do this at all (maybe to make bit operations easier? Serialization?). Anyway, I'd personally prefer an explicit memcpy() to make it more obvious what's going on.
