Numbers
The JavaScript specification makes a clear distinction between three Number-related terms/concepts:
- Number value - the number representation "corresponding to a double-precision 64-bit binary format IEEE 754-2008 value"
- Number type - "set of all possible Number values including the special Not-a-Number (NaN) value, positive infinity and negative infinity"
- Number object - "member of the Object type that is an instance of the standard built-in Number constructor"
We will use Number interchangeably to refer to either the representation, the value or the Number constructor.
An introductory note regarding JavaScript internals related to Numbers:
- All numbers are internally handled and stored in double-precision 64-bit binary format, following the IEEE 754-2008 specification (yes, in JavaScript every number is a floating-point number - see the snippet after this list)
- NaN, +Infinity (the same as Infinity) and -Infinity are all Numbers
typeof NaN; // 'number'
typeof +Infinity; // 'number'
+Infinity === Infinity; // true
typeof -Infinity; // 'number'
- JavaScript has both +0 (or 0) and -0, but they are the same
0 === -0; // true
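To illustrate the first point above: an integer-looking literal and its floating-point counterpart are the very same value.
1 === 1.0; // true
Number.isInteger(1.0); // true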
From String to Number
Although user input that reaches the server over HTTP is handled as a String, internally applications may expect numeric values (e.g. the user's age).
Looking at the Number constructor properties, we find the Number.parseInt and Number.parseFloat1 methods - both expect a String argument, returning an integer and a floating-point number, respectively.
Number.parseInt also accepts a radix which, when not specified or undefined, defaults to 10 (decimal base), except when the String argument begins with 0x or 0X, in which case a radix of 16 (hexadecimal base) is assumed.
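A quick illustration of the radix behavior just described:
Number.parseInt('16'); // 16 (radix defaults to 10)
Number.parseInt('0x10'); // 16 (radix 16 assumed due to the 0x prefix)
Number.parseInt('10', 2); // 2 (radix 2, binary)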
How the expected String argument is parsed is fully detailed in the specification; however, issues may arise, as the following examples show.
Requesting the User's Age
We're expecting an integer. But over HTTP, the value will arrive as a String. Let us parse it.
const userInputAge = '32';
const userAge = Number.parseInt(userInputAge);
console.log('User is %d years old', userAge);
// User is 32 years old
No problem - but what if the user's input looks like '32 years old'?
const userInputAge = '32 years old';
const userAge = Number.parseInt(userInputAge);
console.log('User is %d years old', userAge);
// User is 32 years old
Despite the fact that the input String is alphanumeric, Number.parseInt returns its 'integer part': leading whitespace is removed and digits are consumed until the first non-numeric character. However, if the first character after removing any leading whitespace is other than - (HYPHEN-MINUS), + (PLUS SIGN) or a digit, we will get NaN - Not-a-Number.
const userInputAge = 'thirty 2';
const userAge = Number.parseInt(userInputAge);
console.log('User is %d years old', userAge);
// User is NaN years old
There were no parsing errors, but we know that we are not ready to go with the user's age. Testing whether the Number.parseInt(userInputAge) result is a Number won't suffice, as NaN is itself a Number.
const userInputAge = 'thirty 2';
const userAge = Number.parseInt(userInputAge);
if (typeof userAge !== 'number') {
throw new Error('invalid age');
}
console.log('User is %d years old', userAge);
// User is NaN years old
Let's enforce that the parsed user's age is in fact an integer greater than zero
const userInputAge = 'thirty 2';
const userAge = Number.parseInt(userInputAge);
if (!Number.isInteger(userAge) || userAge <= 0) {
throw new Error('invalid age');
}
console.log('User is %d years old', userAge);
Note: Number.isInteger returns false for Infinity/-Infinity.
The user's age upper limit was omitted for brevity.
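For illustration only, a minimal sketch of what such an upper bound could look like - the limit of 150 is an arbitrary assumption, not part of the original example:
const userInputAge = 'thirty 2';
const userAge = Number.parseInt(userInputAge);
// 150 is an assumed, arbitrary upper bound
if (!Number.isInteger(userAge) || userAge <= 0 || userAge > 150) {
  throw new Error('invalid age');
}
console.log('User is %d years old', userAge);
// Error: invalid age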
Now that we get a validation error, it may look safe, but... what if the user provides their age using hexadecimal base (well, at least they will look younger)?
const userInputAge = '0x20';
const userAge = Number.parseInt(userInputAge);
if (!Number.isInteger(userAge) || userAge <= 0) {
throw new Error('invalid age');
}
console.log('You are %d years old', userAge);
// You are 32 years old
Surprisingly or not, 0x20 did validate as an integer. Why? In fact, Number.parseInt('0x20'); returns the integer number 32: the '0x20' string is parsed as hexadecimal due to the 0x prefix (0X is also valid), and the resulting Number value carries no notion of the original base - by default it is presented in decimal.
As we said before, Number.parseInt accepts a radix as its second argument. So, to enforce that userInputAge is given as a decimal number, we just have to provide a radix equal to 10
const userInputAge = '0x20';
const userAge = Number.parseInt(userInputAge, 10);
if (!Number.isInteger(userAge) || userAge <= 0) {
throw new Error('invalid age');
}
console.log('You are %d years old', userAge);
And as expected, we have the validation error.
Even at this point, we can't be sure that what was entered was a decimal integer number: providing 32,5 will end up being parsed as 32
const userInputAge = '32,5';
const userAge = Number.parseInt(userInputAge, 10);
if (!Number.isInteger(userAge) || userAge <= 0) {
throw new Error('invalid age');
}
console.log('You are %d years old', userAge);
// You are 32 years old
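One way to catch such input, sketched here, is to convert the whole string instead of parsing just its leading numeric part - Number() (like the unary + covered later) returns NaN unless the entire string is numeric:
const userInputAge = '32,5';
const userAge = Number(userInputAge); // NaN - the whole string must be a valid number
if (!Number.isInteger(userAge) || userAge <= 0) {
  throw new Error('invalid age');
}
// Error: invalid age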
Requesting Weight
Weight is a good example of a floating-point number, so let's ask users to input theirs.
const userInputWeight = '80.5';
const userWeight = Number.parseFloat(userInputWeight);
console.log('User\'s weight is %d Kg', userWeight);
// User's weight is 80.5 Kg
Depending on the user's locale2, a , (comma) may be used as the decimal separator. What difference does it make?
const userInputWeight = '80,5';
const userWeight = Number.parseFloat(userInputWeight);
console.log('User\'s weight is %d Kg', userWeight);
// User's weight is 80 Kg
Exactly 0.5 Kg of difference (Number.parseFloat returns 80): per the specification, Number.parseFloat uses . (dot) as the decimal separator.
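If comma-separated decimals are expected, one option is to normalize the input before parsing - a minimal sketch, assuming the , is used only as the decimal separator (not as a thousands separator):
const userInputWeight = '80,5';
// replace the decimal comma with a dot before parsing
const userWeight = Number.parseFloat(userInputWeight.replace(',', '.'));
console.log('User\'s weight is %d Kg', userWeight);
// User's weight is 80.5 Kg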
Type Coercion
Quite often, String to Number conversion is done using the unary + operator, forcing type coercion
+''; // 0
+'0'; // 0
+'-0'; // -0
+'NaN'; // NaN
+' 1'; // 1
+'-1'; // -1
+'0.1'; // 0.1
However, this may not lead to the expected results:
type coercion and Number.parseFloat inconsistency
const userInput = '80,5';
parseFloat(userInput); // 80
+userInput; // NaN
octal representation
const octal = 012;
const octalString = '012';
+octal; // 10
+octalString; // 12
To get the expected result, octalString should be equal to '0o12'
const octalString = '0o12';
+octalString; // 10
Safe Integer
ECMAScript 2015 (6th Edition) introduces the concept of "Safe Integer" - an integer that "can be exactly represented as an IEEE-754 double precision number and whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation" (source).
Why is this important? Let's have a look at some simple integer arithmetic
const N = 9007199254740992;
N + 1; // 9007199254740992
N + 2; // 9007199254740994
Is 9007199254740992 safe?
Number.isSafeInteger(9007199254740992); // false
Again, why is this so important?
const MIN = 9007199254740992;
const MAX = 9007199254740994;
for (let i = MIN; i < MAX; i++) {
console.log(i);
}
Yes, this is an infinite loop: MIN is not a "Safe Integer". In fact, the last "Safe Integer" is exactly MIN - 1, which you can get from Number.MAX_SAFE_INTEGER (2⁵³-1). Beyond that boundary, MIN + 1 has no exact IEEE-754 representation, so the increment rounds back to MIN - and even though we're doing an integer operation, JavaScript won't show any error.
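For reference, the boundary can be inspected directly:
Number.MAX_SAFE_INTEGER; // 9007199254740991
Number.isSafeInteger(Number.MAX_SAFE_INTEGER); // true
Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1); // false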
You may expect that Number.MAX_SAFE_INTEGER is the highest number that JavaScript can handle, but no, Number.MAX_VALUE is the highest one:
Number.MAX_VALUE > Number.MAX_SAFE_INTEGER; // true
Number.MAX_VALUE; // 1.7976931348623157e+308
Division by Zero
Don't worry, you'll never get close to a "division by zero" error. Instead, you will get... infinity, as per the IEEE 754-2008 standard
1/0; // Infinity
-1/0; // -Infinity
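Note that 0 divided by 0 is the exception: it yields NaN rather than infinity.
0/0; // NaN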
Precision
This is not a JavaScript-only problem. In fact, this is an issue you will find in most programming languages, as it is a limitation of the already mentioned IEEE 754 specification. It is better to be aware of this, as rounding errors may lead to rockets missing their targets3
0.1+0.2; // 0.30000000000000004
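When the results of floating-point arithmetic need to be compared, a common workaround - sketched below - is to check whether the difference falls below a small tolerance such as Number.EPSILON (a fixed tolerance like this only makes sense for values close to 1):
// true when a and b differ by less than the smallest representable gap around 1
const almostEqual = (a, b) => Math.abs(a - b) < Number.EPSILON;
0.1 + 0.2 === 0.3; // false
almostEqual(0.1 + 0.2, 0.3); // true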
Converting
To Boolean
JavaScript has a native Boolean type, consisting of the primitive true and false values. Nevertheless, some Numbers evaluate to false and others to true
-1? true : false; // true
1? true : false; // true
Infinity? true : false; // true
-Infinity? true : false; // true
0? true : false; // false
-0? true : false; // false
NaN? true : false; // false
The conversion can be done using the double logical NOT operator (!!)
const number = -1;
if (!!number === true) {
console.log('true');
} else {
console.log('false');
}
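Calling the Boolean function is an equivalent and arguably more explicit alternative to the double NOT:
Boolean(-1); // true
Boolean(0); // false
Boolean(NaN); // false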
To String
Type coercion is commonly used to convert a number into a string, and it does the trick when you're using a decimal base
''+1; // '1'
''+0.1; // '0.1'
''+Math.pow(4,3); // '64'
But if you're using a non-decimal base like octal or hexadecimal, the result may not be what you're expecting, as you will get a string representation of the number's decimal value:
''+012; // '10'
''+0xA; // '10'
To avoid mistakes, always use the same pattern when getting a textual representation of a Number - use the Number.prototype.toString() method, specifying the radix (if not present or undefined, radix 10 is used by default)
const n1 = 10;
const n2 = 012;
const n3 = 0xA;
n1.toString(); // '10'
n1.toString(10); // '10'
n1.toString(undefined); // '10';
n2.toString(8); // '12';
n3.toString(16); // 'a';
This is also the closest you have to base conversion, as you can get an octal representation from a decimal integer or from a hexadecimal value:
const decimalInt = 10;
const hexValue = 0xA;
console.log(decimalInt.toString(8)); // '12';
console.log(hexValue.toString(8)); // '12';
Conclusion
As we said before, over HTTP, what you get on server-side is always a String.
Because of that, before converting String to Number:
- always validate the input against a "white" list of allowed characters (e.g. for decimal integers ^(0|[1-9][0-9]*)$)
- then, validate for expected data types (e.g. parsing String to Number)
- finally, validate the data range
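A minimal sketch combining the three steps for the user's age example - the upper bound of 150 is an arbitrary assumption:
function parseAge(userInput) {
  // 1. allow list of characters: a decimal integer with no leading zeros
  if (!/^(0|[1-9][0-9]*)$/.test(userInput)) {
    throw new Error('invalid age');
  }
  // 2. expected data type: parse the String to a Number, decimal radix
  const age = Number.parseInt(userInput, 10);
  // 3. data range: integer between 1 and an assumed maximum of 150
  if (!Number.isInteger(age) || age <= 0 || age > 150) {
    throw new Error('invalid age');
  }
  return age;
}
parseAge('32'); // 32
parseAge('0x20'); // Error: invalid age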
You can read more about How numbers are encoded in JavaScript by Dr. Axel Rauschmayer at 2ality.com.
1. The implementation is shared with the global parseInt() and parseFloat() functions: parseInt === Number.parseInt && parseFloat === Number.parseFloat; // true ↩
2. "In computing, a locale is a set of parameters that defines the user's language, region and any special variant preferences that the user wants to see in their user interface." (source) ↩
3. https://en.wikipedia.org/wiki/MIM-104_Patriot#Failure_at_Dhahran ↩