252

I have a data frame that stores a store name and a daily sales count. I am trying to insert this into Salesforce using the Python script below.

However, I get the following error:

TypeError: Object of type 'int64' is not JSON serializable

Here is a view of the data frame:

Storename,Count
Store A,10
Store B,12
Store C,5

I use the following code to insert it into Salesforce:

update_list = []
for i in range(len(store)):
    update_data = {
        'name': store['entity_name'].iloc[i],
        'count__c': store['count'].iloc[i] 
    }
    update_list.append(update_data)

sf_data_cursor = sf_datapull.salesforce_login()
sf_data_cursor.bulk.Account.update(update_list)

I get the error when the last line above gets executed.

How do I fix this?

  • That call to range is suspicious. You are taking len(store) and wrapping that in a tuple, and then calling range on the tuple. If you remove one set of parentheses, does it fix the code? That is, try this: for i in range(len(store)):. Commented Jun 18, 2018 at 20:00
  • @TimJohns A pair of parentheses around a number does not make it a tuple. (34) is still the number 34. But (34,) is a tuple. Commented Jun 18, 2018 at 20:04
  • @DyZ Good point, I didn't realize that parentheses with a single argument are treated differently than if there are multiple arguments. Commented Jun 18, 2018 at 20:13
  • @TimJohns The parens are irrelevant. a=34, is also a tuple. Commented Feb 20, 2020 at 15:27
  • There is an open bug report about this issue: bugs.python.org/issue24313 Commented Jul 22, 2020 at 20:59

15 Answers

250

You can define your own encoder to solve this problem.

import json
import numpy as np

class NpEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super(NpEncoder, self).default(obj)

# Your code ...
json.dumps(data, cls=NpEncoder)

6 Comments

@IvanCamilitoRamirezVerdes That's exactly what I was looking for, so it was quite helpful for me. I like the readability of passing an encoder class to json.dumps as well.
I also needed to add elif isinstance(obj, np.bool_): return bool(obj)
I've edited this answer to remove the unnecessary "elif"s, since the branches return.
For most cases, this is quite an overkill. The solution by Tharindu Sathischandra is more straightforward.
What a shame that Python does not offer JSON Converter for any type it uses. Python feels like 2 centuries ago... :-(
222

json does not recognize NumPy data types. Convert the number to a Python int before serializing the object:

'count__c': int(store['count'].iloc[i])
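Applied to a bare value, the fix looks like this (a minimal sketch; np.int64 stands in for what .iloc[i] returns from an int64 column):

```python
import json
import numpy as np

count = np.int64(10)  # what .iloc[i] returns from an int64 column
# json.dumps({'count__c': count}) would raise TypeError
payload = {'count__c': int(count)}  # cast to a plain Python int first
encoded = json.dumps(payload)  # -> '{"count__c": 10}'
```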

Comments

49

I'll throw my answer into the ring as a slightly more robust version of @Jie Yang's excellent solution.

My solution

Use the numpyencoder package (see its repository).

from numpyencoder import NumpyEncoder

numpy_data = np.array([0, 1, 2, 3])

with open(json_file, 'w') as file:
    json.dump(numpy_data, file, indent=4, sort_keys=True,
              separators=(', ', ': '), ensure_ascii=False,
              cls=NumpyEncoder)

The breakdown

If you dig into hmallen's code in the numpyencoder/numpyencoder.py file, you'll see that it's very similar to @Jie Yang's answer:


class NumpyEncoder(json.JSONEncoder):
    """ Custom encoder for numpy data types """
    def default(self, obj):
        if isinstance(obj, (np.int_, np.intc, np.intp, np.int8,
                            np.int16, np.int32, np.int64, np.uint8,
                            np.uint16, np.uint32, np.uint64)):

            return int(obj)

        elif isinstance(obj, (np.float_, np.float16, np.float32, np.float64)):
            return float(obj)

        elif isinstance(obj, (np.complex_, np.complex64, np.complex128)):
            return {'real': obj.real, 'imag': obj.imag}

        elif isinstance(obj, (np.ndarray,)):
            return obj.tolist()

        elif isinstance(obj, (np.bool_)):
            return bool(obj)

        elif isinstance(obj, (np.void)): 
            return None

        return json.JSONEncoder.default(self, obj)

4 Comments

Not working in my case. I have this error: TypeError: keys must be str, int, float, bool or None, not bool_
Nice solution, but doesn't work with complex arrays for me.
@Andrew What specific type of complex array are you trying to map and to what types would you like it to map? If you add your own elif statement for complex arrays it should work!
@Jason R Stevens CFA: e.g. np.array([1 + 1j, 2 + 2j]). Sure, an elif would work, too. I've added a more generic method for numpy here in the answers, so you don't have to add the various numpy types manually. stackoverflow.com/a/73634275/15330539
47

Actually, there is no need to write an encoder. Just passing default=str when calling json.dumps takes care of most types by itself, so it's one line of code:

json.dumps(data, default=str)

From the docs of json.dumps and json.dump: https://docs.python.org/3/library/json.html#json.dump

If specified, default should be a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError. If not specified, TypeError is raised.

So calling str converts the numpy types (such as numpy ints or numpy floats) to strings that can be parsed by json. If you have numpy arrays or ranges, they have to be converted to lists first though. In this case, writing an encoder as suggested by Jie Yang might be a more suitable solution.
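For instance, a quick sketch of the tradeoff for arrays:

```python
import json
import numpy as np

arr = np.array([0, 1, 3])
as_str = json.dumps({'a': arr}, default=str)   # -> '{"a": "[0 1 3]"}' (one string, no commas)
as_list = json.dumps({'a': arr.tolist()})      # -> '{"a": [0, 1, 3]}' (a real JSON array)
```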

2 Comments

Excellent solution. However, one drawback is that numpy arrays are returned without delimiter, e.g. '[0 1 3]'. As a json list usually has a comma as delimiter that could be added in a second step, though.
Slight improvement, if all types are known (except int64), then coercing to int means your numbers remain numbers in the JSON (instead of becoming strings): json.dump(vocab, outfile, default=int)
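The difference between the two comments above, in a small sketch:

```python
import json
import numpy as np

data = {'count': np.int64(12)}
with_str = json.dumps(data, default=str)  # -> '{"count": "12"}' (value becomes a string)
with_int = json.dumps(data, default=int)  # -> '{"count": 12}'   (value stays a number)
```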
35

A very simple numpy encoder can achieve similar results more generically.

Note this uses the np.generic class (which most numpy classes inherit from) and its .item() method.

If the object to encode is not a numpy instance, then the json serializer will continue as normal. This is ideal for dictionaries with some numpy objects and some other class objects.

import json
import numpy as np

def np_encoder(obj):
    if isinstance(obj, np.generic):
        return obj.item()

json.dumps(data, default=np_encoder)

10 Comments

Short and concise.
This didn't work when I had a 0-dim array like np.array(1). Simple fix: if isinstance(object, (np.generic, np.ndarray))
Shouldn't this raise TypeError if the condition is false? Or else it implicitly returns None which json will map to null.
Since this is treated as a json dumps default, if nothing is returned, the encoder proceeds as normal. In this case we specifically only attempt to serialize numpy objects in this way. Non numpy objects are serialized using the normal json dumps process. This is a great way to serialize nested dictionaries of mixed object types that may include numpy objects.
This is a more generic and shorter solution, and it worked for me.
18

If you are going to serialize a numpy array, you can simply use the ndarray.tolist() method.

From numpy docs,

a.tolist() is almost the same as list(a), except that tolist changes numpy scalars to Python scalars

In [1]: a = np.uint32([1, 2])

In [2]: type(list(a)[0])
Out[2]: numpy.uint32

In [3]: type(a.tolist()[0])
Out[3]: int
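Since tolist() yields Python scalars all the way down, the result serializes directly (a quick sketch):

```python
import json
import numpy as np

a = np.uint32([1, 2])
# json.dumps(list(a)) would still fail: the list holds numpy.uint32 scalars
encoded = json.dumps(a.tolist())  # -> '[1, 2]'
```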

2 Comments

This was my cause and the simple fix. In fact, writing an encoder to convert an np array to a list is overkill. Thanks.
It works for float, int, and bool, but not with datetime or complex types in combination with json.dump(s).
6

This might be a late response, but I recently got the same error. After a lot of searching, this solution helped me.

import datetime
import json
import numpy as np

def myconverter(obj):
    if isinstance(obj, np.integer):
        return int(obj)
    elif isinstance(obj, np.floating):
        return float(obj)
    elif isinstance(obj, np.ndarray):
        return obj.tolist()
    elif isinstance(obj, datetime.datetime):
        return obj.__str__()

Pass myconverter to json.dumps() like this: json.dumps(message, default=myconverter)

2 Comments

or you can use elif isinstance(obj, (datetime.date, datetime.datetime)): return obj.isoformat()
or even if isinstance(obj, datetime.date): return obj.isoformat() datetime.datetime is a subclass of datetime.date
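Combining the two comments above, a minimal sketch of the isoformat variant (the sample datetime is illustrative):

```python
import datetime
import json

def myconverter(obj):
    if isinstance(obj, datetime.date):  # also matches datetime.datetime (a subclass)
        return obj.isoformat()

encoded = json.dumps({'when': datetime.datetime(2018, 6, 18, 20, 0)}, default=myconverter)
# -> '{"when": "2018-06-18T20:00:00"}'
```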
6

Here's a version that handles bools, and serializes NaN values (which are not part of the JSON spec) as null.

import json
import numpy as np

class NpJsonEncoder(json.JSONEncoder):
  """Serializes numpy objects as json."""

  def default(self, obj):
    if isinstance(obj, np.integer):
      return int(obj)
    elif isinstance(obj, np.bool_):
      return bool(obj)
    elif isinstance(obj, np.floating):
      if np.isnan(obj):
        return None  # Serialized as JSON null.
      return float(obj)
    elif isinstance(obj, np.ndarray):
      return obj.tolist()
    else:
      return super().default(obj)

# Your code ...
json.dumps(data, cls=NpJsonEncoder)

Comments

3

There are excellent answers in this post, suitable for most cases. However, I needed a solution that works for all numpy types (e.g., complex numbers) and returns conforming JSON (i.e., commas as list separators, unsupported types converted to strings).

Test Data

import numpy as np
import json

data = np.array([0, 1+0j, 3.123, -1, 2, -5, 10], dtype=np.complex128)
data_dict = {'value': data.real[-1], 
             'array': data.real,
             'complex_value': data[-1], 
             'complex_array': data,
             'datetime_value': data.real.astype('datetime64[D]')[0],
             'datetime_array': data.real.astype('datetime64[D]'),
           }

Solution 1: Updated NpEncoder with Decoding to numpy

JSON natively supports only strings, integers, and floats but no special (d)types such as complex or datetime. One solution is to convert those special (d)types to an array of strings with the advantage that numpy can read it back easily, as outlined in the decoder section below.

class NpEncoder(json.JSONEncoder):
    def default(self, obj):
        dtypes = (np.datetime64, np.complexfloating)
        if isinstance(obj, dtypes):
            return str(obj)
        elif isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            if any([np.issubdtype(obj.dtype, i) for i in dtypes]):
                return obj.astype(str).tolist()
            return obj.tolist()
        return super(NpEncoder, self).default(obj)

# example usage
json_str = json.dumps(data_dict, cls=NpEncoder)
# {"value": 10.0, "array": [0.0, 1.0, 3.123, -1.0, 2.0, -5.0, 10.0], "complex_value": "(10+0j)", "complex_array": ["0j", "(1+0j)", "(3.123+0j)", "(-1+0j)", "(2+0j)", "(-5+0j)", "(10+0j)"], "datetime_value": "1970-01-01", "datetime_array": ["1970-01-01", "1970-01-02", "1970-01-04", "1969-12-31", "1970-01-03", "1969-12-27", "1970-01-11"]}

Decoding to numpy

Special (d)types must be converted manually after loading the JSON.

json_data = json.loads(json_str)

# Converting the types manually
json_data['complex_value'] = complex(json_data['complex_value'])
json_data['datetime_value'] = np.datetime64(json_data['datetime_value'])

json_data['array'] = np.array(json_data['array'])
json_data['complex_array'] = np.array(json_data['complex_array']).astype(np.complex128)
json_data['datetime_array'] = np.array(json_data['datetime_array']).astype(np.datetime64)

Solution 2: Numpy.array2string

Another option is to convert numpy arrays or values to strings within numpy itself, i.e., with np.array2string. This option should be pretty robust, and you can adapt the output as needed.

import sys
import numpy as np

def np_encoder(obj):
    if isinstance(obj, (np.generic, np.ndarray)):
        out = np.array2string(obj,
                              separator=',',
                              threshold=sys.maxsize,
                              precision=50,
                              floatmode='maxprec')
        # remove whitespaces and '\n'
        return out.replace(' ','').replace('\n','')

# example usage
json.dumps(data_dict, default=np_encoder)
# {"value": 10.0, "array": "[0.,1.,3.123,-1.,2.,-5.,10.]", "complex_value": "10.+0.j", "complex_array": "[0.+0.j,1.+0.j,3.123+0.j,-1.+0.j,2.+0.j,-5.+0.j,10.+0.j]", "datetime_value": "'1970-01-01'", "datetime_array": "['1970-01-01','1970-01-02','1970-01-04','1969-12-31','1970-01-03','1969-12-27','1970-01-11']"}

Comments:

  • all numpy arrays are strings ("[1,2]" vs. [1,2]) and must be read with a special decoder
  • threshold=sys.maxsize returns as many entries as possible without triggering summarization (...,).
  • With the other parameters (precision, floatmode, formatter, ...) you can adapt your output as needed.
  • For a compact JSON, I removed all whitespaces and linebreaks (.replace(' ','').replace('\n','')).

2 Comments

complex number is not a standard feature of JSON, so you might have to take some extra care to make sure that the deserializer understands your chosen format
That's correct, and the workaround is to convert the non-native JSON formats to strings. On JSON import, the conversion back to the non-native formats can then be done manually. I also added the latest version I'm using (the first solution).
1

If you have this error

TypeError: Object of type 'int64' is not JSON serializable

you can cast the specific columns with an int dtype to float64, for example:

df = df.astype({'col1_int':'float64', 'col2_int':'float64', etc..})

float64 values are written fine to Google Spreadsheets.

1 Comment

The question does not mention pandas at all, nor Google Spreadsheets.
1

If you use plotly:

import plotly
json.dumps(data, cls=plotly.utils.PlotlyJSONEncoder)

Comments

0

I got the idea from the answers above, and the code below works for me:

import numpy as np

def convert_to_serializable(data):
    if isinstance(data, dict):
        return {key: convert_to_serializable(value) for key, value in data.items()}
    elif isinstance(data, list):
        return [convert_to_serializable(item) for item in data]
    elif isinstance(data, np.integer):
        return int(data)
    elif isinstance(data, np.floating):
        return float(data)
    else:
        return data

Comments

-1
update_data = {
    'name': str(store['entity_name'].iloc[i]),
    'count__c': str(store['count'].iloc[i]) 
}

1 Comment

Please explain your answer.
-2

If you have control over the creation of the DataFrame, you can force it to use standard Python types for values (e.g. int instead of numpy.int64) by setting dtype to object:

df = pd.DataFrame(data=some_your_data, dtype=object)

The obvious downside is that you get less performance than with primitive datatypes. But I like this solution tbh, it's really simple and eliminates all possible type problems. No need to give any hints to the ORM or json.
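A quick sketch of the effect (column names borrowed from the question; to_dict('records') is just one convenient way to get a serializable structure):

```python
import json
import pandas as pd

# dtype=object stores plain Python ints instead of numpy.int64
df = pd.DataFrame({'Storename': ['Store A', 'Store B'], 'Count': [10, 12]},
                  dtype=object)
first = df['Count'].iloc[0]                # a plain int, not numpy.int64
encoded = json.dumps(df.to_dict('records'))  # works without a custom encoder
```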

Comments

-2

I was able to make it work by round-tripping through a dump and a load.

Code:

import json

json.loads(json.dumps(your_df.to_dict()))

Comments
