
Precision Issue of jq

Published:  at  10:29 AM

jq is a commonly used JSON processing tool on Linux. I often use it when writing data-processing shell scripts or filtering API responses. I recently discovered a subtle precision-loss issue through a production incident.

Problem

A recent error occurred in our production service. Checking the logs, we found that an associated ID recorded in Table A of the database could not be found in Table B. The ID is 1721789858467004400 and is stored in a JSON string field in Table A, formatted as {"some_key": 1721789858467004400}. We confirmed that this ID indeed does not exist in Table B, but normally an invalid ID should not appear. So how did this ID come about?

Investigation

First, we found that although Table B does not contain 1721789858467004400, it does contain a similar ID, 1721789858467004418. The former looks like the latter with its last two digits rounded away, which made us suspect precision loss.

Frontend

Our first intuition was that the ID might have been passed in from the frontend. The largest integer that JavaScript’s Number type can safely represent is 2^53 − 1 (Number.MAX_SAFE_INTEGER), so any larger integer must be passed as a string, otherwise precision loss will occur. The original ID 1721789858467004418 is well beyond that limit. Following this idea, we reviewed all interfaces related to this ID and found that they were all read-only. The ID is written only during backend initialization, so it couldn’t have been modified by the frontend.
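The same boundary is easy to see with jq itself: jq performs arithmetic in IEEE 754 doubles regardless of version, so integers collapse past 2^53 (a quick illustration):

```shell
# 2^53 = 9007199254740992 is the last point up to which every integer
# is exactly representable as a double; 2^53 + 1 rounds back down.
jq -n '9007199254740992 + 1'
# prints 9007199254740992 — the + 1 is lost
```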

Backend

Could it have been modified by backend logic? Also unlikely. All code paths related to this ID read it from the database, set it on a Java object, and write it back without ever modifying it. Moreover, the field uses the Long type, so there should be no precision loss there.

Script

With frontend and backend issues ruled out, the only possibility left was direct database operations. Considering the precision issue, it’s unlikely to be manually inserted—more likely it was inserted by a script. We reviewed shell scripts that operated on Table A in the past few months and found one script that did modify this field. It used jq to delete a deprecated key from this field and then wrote the result back:

# read field from db
new_field_value=$(jq 'del(.some_deprecated_key)' <<< "$field_value")
# update field into db

This seems fine—only a key was deleted, no other key was modified. But when we tested it manually, we found the issue:

jq 'del(.key_2)' <<< '{"key_1": 1721789858467004418, "key_2": 1721789858467004418}'

{
  "key_1": 1721789858467004400
}

The value of key_1, which wasn’t even touched by the filter, lost its last two digits of precision. The same issue appears even when jq simply parses and re-emits the number:

jq <<< 1721789858467004418

1721789858467004400

Now it’s clear that the problem is caused by jq’s precision loss.

jq Precision Issue

Origin

There are quite a few GitHub issues about jq’s precision loss. Back in 2013, the jq maintainer explained the rationale in detail in the issue “JSON does allow better than IEEE 754 numbers”.

While the explanation is reasonable, many users are still surprised by the precision loss when jq handles int64.
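As an aside, the double nearest to 1721789858467004418 is actually 1721789858467004416; jq prints 1721789858467004400 because its printer emits a shorter decimal rendering of that same double. The underlying double can be inspected with awk, which also computes in IEEE 754 doubles and prints them in full with %.0f (an illustration outside jq):

```shell
# awk parses the literal into a double; %.0f prints that double exactly,
# revealing the value jq actually stores before it is printed.
awk 'BEGIN { printf "%.0f\n", 1721789858467004418 }'
# prints 1721789858467004416
```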

Solution

The good news is that jq 1.7 preserves the precision of integer literals (as long as no arithmetic is performed on them), finally fixing this issue:

# precision is preserved
$ echo '100000000000000000' | jq .
100000000000000000
# comparison respects precision (this is false in JavaScript)
$ jq -n '100000000000000000 < 100000000000000001'
true
# sort/0 works
$ jq -n -c '[100000000000000001, 100000000000000003, 100000000000000004, 100000000000000002] | sort'
[100000000000000001,100000000000000002,100000000000000003,100000000000000004]
# arithmetic operations might truncate (same as JavaScript)
$ jq -n '100000000000000000 + 10'
100000000000000020

I checked and found our servers come with jq version 1.6. Replacing it with the 1.7 binary would completely avoid this issue.
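Until every machine is upgraded, a defensive check at the top of any script that rewrites JSON fields can prevent silent corruption. A minimal sketch, assuming only that jq is on PATH (the function name and probe value are our own):

```shell
# Round-trip a known int64 probe through jq; if the digits change,
# this jq build (e.g. 1.6) loses precision and must not rewrite IDs.
jq_preserves_int64() {
  probe='1721789858467004418'
  [ "$(jq -n "$probe")" = "$probe" ]
}

if jq_preserves_int64; then
  echo "jq preserves int64 literals; safe to proceed"
else
  # a real migration script would abort here (exit 1)
  echo "this jq build corrupts int64 values; do not rewrite IDs" >&2
fi
```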


