Why does C#'s decimal use binary integer significand?

The new IEEE 128-bit decimal floating-point type (https://en.wikipedia.org/wiki/Decimal128_floating-point_format) specifies that the significand (mantissa) can be represented in one of two ways: either as a simple binary integer, or in densely packed decimal (where every ten bits represent three decimal digits).

C#’s decimal type predates this standard but is built on the same idea. It went with a binary integer significand.
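Concretely, a C# decimal is 128 bits: a 96-bit unsigned binary integer significand, a sign bit, and a scale in the range 0–28, with the value being significand × 10⁻ˢᶜᵃˡᵉ. Here is a sketch in Python that decodes the four 32-bit integers `decimal.GetBits` returns in .NET (the layout below follows the documented format; the function name is mine):

```python
from fractions import Fraction

def decode_decimal(lo, mid, hi, flags):
    """Decode the four 32-bit ints returned by C#'s decimal.GetBits.

    lo/mid/hi form the 96-bit binary integer significand; flags carries
    the scale in bits 16-23 (range 0-28) and the sign in bit 31.
    """
    significand = (hi << 64) | (mid << 32) | lo
    scale = (flags >> 16) & 0xFF
    sign = -1 if (flags >> 31) & 1 else 1
    return Fraction(sign * significand, 10 ** scale)

# 1.5m is stored as significand 15 with scale 1, i.e. 15 * 10^-1
assert decode_decimal(15, 0, 0, 1 << 16) == Fraction(3, 2)
```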

On the face of it, this seems inefficient: for addition and subtraction, you have to divide one of the significands by a power of ten to line them up, and division is the most expensive of all the arithmetic operators.

What was the reason for the choice? What corresponding advantage was considered worth that penalty?

1 Answer

Choosing one representation over another is almost always about trade-offs.

From here

A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII, Unicode, etc.) and BCD. A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data.
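The conversion cost the quote mentions can be illustrated with a toy comparison. With a binary significand, extracting each decimal digit requires an integer division by ten; with a decimal-coded significand (BCD is used here as a simpler stand-in for densely packed decimal), the digits fall out with shifts and masks:

```python
def binary_significand_to_digits(n):
    """Extract decimal digits from a binary integer significand.
    Each digit costs an integer division by 10 -- the conversion
    overhead the quote refers to."""
    digits = []
    while n:
        n, d = divmod(n, 10)
        digits.append(d)
    return digits[::-1] or [0]

def bcd_significand_to_digits(bcd):
    """With a BCD significand, each 4-bit nibble *is* a digit, so
    extraction needs only masking and shifting -- no division."""
    digits = []
    while bcd:
        digits.append(bcd & 0xF)
        bcd >>= 4
    return digits[::-1] or [0]

assert binary_significand_to_digits(1234) == [1, 2, 3, 4]
assert bcd_significand_to_digits(0x1234) == [1, 2, 3, 4]
```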

Here you can find more about the relative performance.

Basically, it confirms your suspicion: a decimal significand is generally faster, but most operations show similar performance, and binary even wins at division. Also keep in mind that since Intel mostly seems to rely on binary significands (I couldn’t find hints about other manufacturers), binary is more likely to get hardware support and might then beat decimal by a good margin.

Answered on July 16, 2020.
