Although you have compressed the file itself, once you load it with Python's json package, you end up loading the entire thing into memory. Because Python objects carry significant overhead, a 100MB file typically consumes considerably more than 100MB once parsed; I recently observed that loading a 324MB JSON file used up 1.5GB of memory.
Now, if the issue is storage, then compression is the way to go. However, if you need to process the data in a program, you'd probably want to think about how to read the JSON one object at a time, as opposed to loading the entire thing into memory.
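As an illustration of the general idea (this assumes something about your data that the question doesn't state): if your file happens to be newline-delimited JSON, with one complete object per line, the standard library alone can process it one object at a time:

import json

# Assumes newline-delimited JSON (JSONL): one complete object per line.
# Only the current line's object is held in memory at any point.
with open("path/to/file.jsonl") as f:
    for line in f:
        do_something(json.loads(line))  # do_something is a placeholder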
What @amirouche has suggested should work if you're happy to do it "by hand", so go for it. For something ready-made, https://pypi.org/project/json-lineage/ might be a possible solution. Disclaimer: I wrote the code for it.
I'm sure there are other tools out there that do the same thing: read JSON one object at a time.
If you do end up using json-lineage, here is a small example that should do the trick for you:
from json_lineage import load

# load() returns an iterator rather than a fully parsed document,
# yielding one object at a time instead of loading the whole file.
jsonl_iter = load("path/to/file.json")

for obj in jsonl_iter:
    do_something(obj)
BZ2File has a read method that returns an arbitrary number of bytes, so I would probably consider trying to read the JSON as a stream, with something like pypi.python.org/pypi/ijson
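A minimal sketch of that approach, assuming the compressed file contains a single top-level JSON array of objects ("item" is the ijson prefix that matches each element of a top-level array):

import bz2
import ijson

# Decompress and parse incrementally: objects are yielded one at a
# time, so neither the decompressed data nor the parsed result has
# to fit in memory all at once.
with bz2.open("path/to/file.json.bz2", "rb") as f:
    for obj in ijson.items(f, "item"):
        do_something(obj)  # do_something is a placeholder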