Saturday, September 1, 2012

Doubles and Decimals in C#

Although I'm writing about C#, from what I've read about floating-point types so far, this applies to most languages.

If you've ever used double values in a program, I can safely say at this point that you are probably doing it wrong. You may never notice, but arithmetic on doubles will eventually give you a tiny error, and you may not catch it until the day some critical function that's supposed to fire based on a value silently fails to trigger. Murphy's law. Believe in it.

Where does this come from? While building a system for very, very basic data analysis, I needed an even more basic feature: adding the numbers in a column and checking that the total equalled 100 before allowing the user to progress to the next step. I tested the program and everything seemed fine. Just as I was about to roll the system out, I had a few more values to change in the database, and instead of doing it manually I thought I'd test the system again and make the changes through it. (Why is this important? Because it's incredible what can go unnoticed through several hours of thorough testing.) All this time I had been adding whole numbers; this time I actually needed to add decimal numbers. Even though the same calculation on paper gave me 100, the program insisted the values did not add up to 100. In debug mode, I discovered that at one point 22.4 + 23.7 was coming out as 46.099999999999994 instead of 46.1. What. The. Debug??
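To make that concrete, here's a minimal sketch of the failure. The column values are invented for illustration (not the ones from my database), but on paper they sum to exactly 100:

    using System;

    class DoubleSumDemo
    {
        static void Main()
        {
            // The single addition caught in the debugger:
            double sum = 22.4 + 23.7;
            Console.WriteLine(sum.ToString("R")); // 46.099999999999994 -- not 46.1
            Console.WriteLine(sum == 46.1);       // False

            // The same effect sinks a "must total 100" check.
            double[] column = { 22.4, 23.7, 18.5, 20.1, 15.3 };
            double total = 0;
            foreach (double v in column)
                total += v; // each addition can pick up a tiny binary rounding error

            Console.WriteLine(total == 100.0); // False -- the total lands a hair away from 100
        }
    }

Neither 22.4 nor 23.7 has an exact binary representation, so each one is stored slightly off, and the sum inherits the error.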

Reading through the documentation and several threads online taught me that double values aren't meant to be exact; they are meant for speed. Binary floating point simply cannot represent most decimal fractions exactly, and doubles can be affected by the strangest things, such as older versions of DirectX switching the processor's floating-point precision settings. It's scary, but it is what it is. The solution? Use the decimal type. For anything involving base-10 quantities that must add up exactly, use the slower but exact decimal type. DO NOT use binary floating-point variables for those values.
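In my case the fix was little more than a change of type. Here is the same sketch with decimal (again with invented column values); because decimal stores base-10 digits, these additions come out exact:

    using System;

    class DecimalSumDemo
    {
        static void Main()
        {
            // Same made-up column, this time as decimal literals (note the m suffix).
            decimal[] column = { 22.4m, 23.7m, 18.5m, 20.1m, 15.3m };

            decimal total = 0m;
            foreach (decimal v in column)
                total += v; // decimal works in base 10, so these values add exactly

            Console.WriteLine(total);         // 100.0
            Console.WriteLine(total == 100m); // True
        }
    }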

But that raises the question: what are double types actually good for? Sure, they're faster, but where would you use them? Essentially, anywhere that needs speed and can tolerate a minuscule relative error. Redrawing sprites based on their screen positions can happily use double for the vector coordinates. Believe it or not, mining an extremely large dataset for a general trend is another likely application. But for those of us writing applications for everyday use, we need exact values, so keep the decimal type in mind.
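As a rough illustration of that first case, here's a hypothetical sprite-movement struct where double is a perfectly sensible choice: the update runs in a hot loop and a sub-pixel drift is invisible.

    // A hypothetical sprite, the kind of hot-loop data where double's speed wins.
    struct Sprite
    {
        public double X, Y;   // screen position; tiny rounding drift is harmless here
        public double VX, VY; // velocity in pixels per second

        public void Update(double deltaSeconds)
        {
            // Called every frame for every sprite; double keeps this cheap.
            X += VX * deltaSeconds;
            Y += VY * deltaSeconds;
        }
    }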
