What Every Data Scientist Should Know About Floating-Point Arithmetic

In 1991, David Goldberg at Xerox PARC published the seminal paper on floating-point arithmetic, "What Every Computer Scientist Should Know About Floating-Point Arithmetic." The paper was especially influential in the 1990s and early 2000s, when hardware limitations pushed people to work in the regime that most exposes them to the limitations of floating-point arithmetic: storing and computing with 32-bit single precision floats. With modern 64-bit architectures and cheap storage and RAM, the incentive to use 32-bit floats has largely disappeared, and computer scientists generally use 64-bit double precision floats, which greatly reduces their exposure to floating-point problems. However, 64-bit floats do not remove the issues; they just kick the proverbial can down the road, waiting for usage patterns to evolve until floating-point problems become prominent again. With Big Data and extremely large data sets, both data scientists and computer scientists need to understand the limitations of floating-point arithmetic.
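To see concretely what those limitations look like, here is a small, illustrative Python/NumPy sketch (not from Goldberg's paper) showing both halves of the claim above: 32-bit floats stop distinguishing consecutive integers above 2**24, and naively accumulating ten million values of 0.1 drifts badly in single precision, while 64-bit doubles shrink the error dramatically without eliminating it.

```python
import numpy as np

# Integer resolution: above 2**24 (= 16,777,216) consecutive integers
# can no longer be represented exactly in a 32-bit float, so distinct
# values collide.
print(np.float32(16_777_217) == np.float32(16_777_216))  # True  -- both round to 16777216.0
print(float(16_777_217) == float(16_777_216))            # False -- 64-bit doubles keep them apart

# Accumulation: naively summing 0.1 ten million times.
total32 = np.float32(0.0)   # single precision accumulator
total64 = 0.0               # Python floats are 64-bit doubles
for _ in range(10_000_000): # takes a few seconds to run
    total32 += np.float32(0.1)
    total64 += 0.1

print(total32)  # well above the exact 1,000,000 -- the error is on the order of 10%
print(total64)  # very close to 1,000,000, but a small rounding error still remains
```

The double precision result is not exact either, which is the point: the errors are merely pushed far enough to the right of the decimal point that most workloads never notice them, until the data gets large enough that they resurface.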