
Question:

Why might a programmer choose to represent a data item in IEEE binary128 floating-point format instead of IEEE binary64 floating-point format? What additional costs might be incurred at runtime (when the application program executes) as a result of using the 128-bit instead of the 64-bit format?


Step by Step Answer:

A programmer would choose binary128 over binary64 for greater precision and range. Binary64 provides a 52-bit fraction (about 15 to 16 significant decimal digits) and an 11-bit exponent; binary128 provides a 112-bit fraction (about 34 significant decimal digits) and a 15-bit exponent. Applications that accumulate rounding error over many operations, or that must represent extremely large or small magnitudes, benefit from the wider format.

The runtime costs are twofold. First, each value occupies twice as much memory, doubling the storage footprint, the cache pressure, and the memory bandwidth needed to move the data. Second, most general-purpose CPUs implement binary64 arithmetic in hardware but not binary128, so 128-bit operations must typically be emulated in software or decomposed into multiple hardware operations, making each arithmetic operation many times slower.
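A minimal sketch of the precision difference, assuming a GCC toolchain where the non-standard __float128 type maps to IEEE binary128 and libquadmath is available (both are GCC extensions, not portable C). It shows a small increment that binary64 cannot represent but binary128 can, along with the doubled storage size:

```c
/* Demo (GCC-specific): IEEE binary64 (double) vs. IEEE binary128 (__float128).
   Build with: gcc demo.c -lquadmath */
#include <stdio.h>
#include <quadmath.h>

int main(void) {
    double d = 1.0 + 1e-20;        /* 1e-20 is below binary64 precision: lost */
    __float128 q = 1.0Q + 1e-20Q;  /* within binary128 precision: preserved */

    char buf[128];
    printf("binary64 : %.17g\n", d);                 /* prints 1 */
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", q); /* prints ~1.00000000000000000001 */
    printf("binary128: %s\n", buf);

    /* Storage cost: binary128 values take twice the memory per item. */
    printf("sizeof(double) = %zu, sizeof(__float128) = %zu\n",
           sizeof(double), sizeof(__float128));
    return 0;
}
```

On most x86-64 processors the __float128 arithmetic above is emulated in software by libquadmath rather than executed in hardware, which illustrates the per-operation runtime cost of the wider format.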

Related Book:

Systems Architecture, 7th edition, by Stephen D. Burd. ISBN: 978-1305080195
