Can a static_cast<float> from double, assigned to double, be optimized away?
I stumbled on a function that I think is unnecessary, and generally scares me:
float coerceToFloat(double x)
{
    volatile float y = static_cast<float>(x);
    return y;
}
Which is then used like this:
// double x
double y = coerceToFloat(x);
Is this ever any different from just doing this?:
double y = static_cast<float>(x);
The intention seems to be to just strip the double down to single precision. It smells like something written out of extreme paranoia.
c++ casting floating-point

asked 1 hour ago by Ben
No, there's no difference. As for the reasons, that's really not something we can speculate about (especially without any more context). You have to ask the original author for that.
– Some programmer dude, 1 hour ago

Note that I'm talking about C. The volatile keyword indicates that the variable mustn't be cached in registers. Unless you are expecting some external event to change the memory region your variable lies in, it is meaningless as far as I know.
– thoron, 1 hour ago

I have no idea why the author of the code used a volatile variable. The function is no different from float coerceToFloat(double x) { return static_cast<float>(x); } as far as I am aware.
– NathanOliver, 1 hour ago

I mean, it's good practice to give this operation a name. coerceToFloat is certainly a lot more explicit about the intent than a plain static cast. The volatile... Hm. Maybe for debugging?
– Max Langhof, 1 hour ago

I may have found a bread crumb. This says using volatile can break up floating point operations. Maybe the author used it for the same reason: it forces the compiler to truncate here, instead of optimizing it away and not spitting out an intermediate result.
– NathanOliver, 53 mins ago
3 Answers
Accepted answer (score 1), answered 26 mins ago by Eric Postpischil:
static_cast<float>(x) is required to remove any excess precision, producing a float. While the C++ standard generally permits implementations to retain excess floating-point precision in expressions, that precision must be removed by cast and assignment operators.
The license to use greater precision is in C++ draft N4659 clause 8, paragraph 13:
The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby. [64]
Footnote 64 says:
The cast and assignment operators must still perform their specific conversions as described in 8.4, 8.2.9 and 8.18.
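A minimal sketch of how to see this on a conforming implementation (illustrative code, not from the original answer): the result of the bare cast must already be an exact float value, so widening it back to double and narrowing again cannot change it.

    #include <cassert>
    #include <iostream>

    int main()
    {
        double x = 0.1;                       // not exactly representable as a float
        float  f = static_cast<float>(x);     // a conforming cast drops any excess precision here
        double r = f;                         // float -> double widening is exact
        assert(static_cast<float>(r) == f);   // narrowing the widened value changes nothing
        std::cout << (x - r) << '\n';         // the precision the cast actually discarded
        return 0;
    }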
Answer (score 6), answered 44 mins ago by Yakk - Adam Nevraumont:
Some compilers have this concept of "extended precision", where doubles carry with them more than 64 bits of data. This results in floating point calculations that don't match the IEEE standard.
The above code could be an attempt to prevent extended-precision flags on the compiler from removing the precision loss. Such flags explicitly violate the precision assumptions of doubles and floating point values. It seems plausible that they wouldn't do so on a volatile variable.
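As a sketch of the scenario described here (the build flags below are illustrative assumptions, not from the post): when code targets the x87 FPU, intermediate results can live in 80-bit registers, and whether a bare cast is actually rounded to float may then depend on optimization, which is what the volatile store tries to pin down.

    // Sketch only; imagine a build such as: g++ -O2 -m32 -mfpmath=387 example.cpp
    float narrow_plain(double x)
    {
        return static_cast<float>(x);             // may be handed back still carrying extra precision
    }

    float narrow_forced(double x)
    {
        volatile float y = static_cast<float>(x); // the store to y must produce a real 32-bit float
        return y;
    }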
Answer (score 4), answered 32 mins ago by Pete Becker:
Following up on the comment by @NathanOliver -- compilers are allowed to do floating-point math at higher precision than the types of the operands require. Typically on x86 that means that they do everything as 80-bit values, because that's the most efficient in the hardware. It's only when a value is stored that it has to be reverted to the actual precision of the type. And even then, most compilers by default will do optimizations that violate this rule, because forcing that change in precision slows down the floating-point operations. Most of the time that's okay, because the extra precision isn't harmful. If you're a stickler, you can use a command-line switch to force the compiler to honor that storage rule, and you might see that your floating-point calculations are significantly slower.
In that function, marking the variable volatile tells the compiler that it cannot elide storing that value; that, in turn, means that it has to reduce the precision of the incoming value to match the type that it's being stored in. So the hope is that this would force truncation.
And, no, writing a cast instead of calling that function is not the same, because the compiler (in its non-conforming mode) can skip the assignment to y if it determines that it can generate better code without storing the value, and it can skip the truncation as well. Keep in mind that the goal is to run floating-point calculations as fast as possible, and having to deal with niggling rules about reducing precision for intermediate values just slows things down.
In most cases, running flat-out by skipping intermediate truncations is what serious floating-point applications need. The rule requiring truncation on storage is more of a hope than a realistic requirement.
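For reference, the "command-line switch" mentioned above corresponds on GCC, for example, to options such as -ffloat-store; the sketch below is an illustration of that trade-off, not something the original answer specifies.

    // Illustrative GCC invocations (other compilers have their own equivalents):
    //
    //   g++ -O2 program.cpp                 // fast: excess precision may survive in registers
    //   g++ -O2 -ffloat-store program.cpp   // floating-point variables are written to memory,
    //                                       // discarding excess precision, at a speed cost
    //
    // Even with such a switch, the volatile store in coerceToFloat is a way to force
    // the narrowing at one specific point instead of slowing everything down.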
On a side note, Java originally required that all floating-point math be done at the exact precision required by the types involved. You can do that on Intel hardware by telling it not to extend fp types to 80 bits. This was met with loud complaints from number crunchers because that makes calculations much slower. Java soon changed to the notion of "strict" fp and "non-strict" fp, and serious number crunching uses non-strict, i.e., make it as fast as the hardware supports. People who thoroughly understand floating-point math (that does not include me) want speed, and know how to cope with the differences in precision that result.
"People who thoroughly understand floating-point math" ... and that does not include me either. Thanks for taking my blurb and turning it into a coherent answer. +1
– NathanOliver, 28 mins ago

"Typically on x86 that means that they do everything as 80-bit values, because that's the most efficient in the hardware" is dubious. Some compilers may have used the 80-bit floating-point registers in the past, but processor designs have moved on, and there are now disadvantages to using those old registers and their operations.
– Eric Postpischil, 23 mins ago

@EricPostpischil I have yet to see a conforming implementation. A lot of the time they will be nonconforming to perform better, with a flag that will do what the standard says, but at a cost to you.
– NathanOliver, 23 mins ago

@NathanOliver: Please show us a code sample with an implementation that does not remove the excess precision when a cast or assignment is performed, along with the assembly generated by that implementation.
– Eric Postpischil, 21 mins ago

@EricPostpischil -- you're right that 80-bit stuff is neanderthal era. I keep forgetting that what I learned 20 years ago is not necessarily state-of-the-art.
– Pete Becker, 19 mins ago