Question: Why might a programmer choose to represent a data item in IEEE binary128 floating-point format instead of IEEE binary64 floating-point format? What additional costs might be incurred at runtime (when the application program executes) as a result of using the 128-bit instead of the 64-bit format?
Step by Step Solution
The larger format increases both the range of values that can be represented (a 15-bit rather than 11-bit exponent field) and the precision (a 113-bit rather than 53-bit significand, roughly 34 versus 15-16 decimal digits). A programmer would therefore choose binary128 when an application needs values outside binary64's range, or results that remain accurate to more significant digits than binary64 can carry.

The additional runtime costs: each value occupies 16 bytes instead of 8, doubling memory consumption, cache pressure, and memory-bus traffic; and because most processors provide no hardware support for binary128 arithmetic, operations on it are typically emulated in software and run many times slower than hardware binary64 operations.
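The precision gap and the storage cost can be illustrated with a short sketch. Python has no native binary128 type, so the `Decimal` type is used here purely to emulate 34-digit precision in software; it is not the IEEE binary interchange format, and its slowness relative to `float` mirrors the cost of software-emulated quad arithmetic:

```python
import struct
from decimal import Decimal, getcontext

# binary64 carries a 53-bit significand: about 15-16 decimal digits.
n = 10**16 + 1                      # needs more than 53 significant bits
assert float(n) == float(10**16)    # the +1 is lost in binary64

# binary128's 113-bit significand (~34 decimal digits) would keep it.
# Decimal at 34-digit precision emulates that precision in software.
getcontext().prec = 34
assert Decimal(n) - Decimal(10**16) == 1

# Storage cost doubles: a binary64 value is 8 bytes, binary128 is 16.
assert struct.calcsize('d') == 8
```

Timing a loop of `Decimal` additions against the same loop on `float` values gives a rough feel for the slowdown an application pays when 128-bit arithmetic has no hardware support.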
