
I have a C library that I am wrapping in Python using ctypes. The C library contains many arrays (tens of thousands of arrays on the order of 5-100 elements each, plus a few much longer arrays) that I want to access as numpy arrays in Python. I thought this would be straightforward using numpy.ctypeslib.as_array; however, when I profile my code with cProfile, I find that it is much faster to use a Python loop to manually copy (!) the data from the ctypes pointers into numpy arrays created on the Python side. Is ctypeslib.as_array known to be slow? I would have thought that simply reinterpreting existing memory as a numpy array would be much faster than copying it element by element in a Python loop.
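For reference, here is a minimal sketch of the two approaches I am comparing (the buffer here is a hypothetical stand-in; in my real code the pointers come back from the wrapped C library):

    import ctypes
    import numpy as np

    # Hypothetical stand-in for one of the many small C arrays.
    n = 50
    c_buf = (ctypes.c_double * n)(*range(n))
    ptr = ctypes.cast(c_buf, ctypes.POINTER(ctypes.c_double))

    # Approach 1: reinterpret the memory in place (what I expected to be fast).
    view = np.ctypeslib.as_array(ptr, shape=(n,))

    # Approach 2: element-by-element copy in a Python loop (what cProfile
    # shows to be faster for me across tens of thousands of small arrays).
    copied = np.empty(n, dtype=np.float64)
    for i in range(n):
        copied[i] = ptr[i]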

  • Do you notice a discrepancy between the small and large arrays? For larger (1M-element) arrays, in my own experience, it's much, much faster. Commented Jul 26, 2013 at 19:43
  • Used with an array, it creates an __array_interface__ property on the ctypes array type. That would have to be done for each size/type of array. Used with a pointer, it creates the interface on the pointer object itself. Then it returns array(obj, copy=False). (See the sketch after these comments.) Commented Jul 26, 2013 at 20:17
  • Related question: Getting data from ctypes array into numpy Commented Jul 27, 2013 at 8:47
  • @Bakuriu: But as_array now adds the __array_interface__, so that question is out of date. If the OP is using ctypes arrays instead of pointers, adding the interface only has to be done once for each size/type of array. Commented Jul 27, 2013 at 13:23
  • Yes, large arrays are converted plenty fast - but it appears that there is a lot of overhead when converting lots of small arrays. Commented Jul 30, 2013 at 16:40
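To make the distinction drawn in the comments concrete, here is a minimal sketch of the two ways of calling numpy.ctypeslib.as_array. The per-type vs. per-pointer setup behavior described in the comments is taken at face value here and may differ between numpy versions:

    import ctypes
    import numpy as np

    ArrType = ctypes.c_double * 5        # one ctypes array type per size/dtype
    arr = ArrType(1.0, 2.0, 3.0, 4.0, 5.0)

    # Passing the ctypes array: no shape argument needed, and (per the
    # comment above) the conversion support is attached to the array *type*,
    # so it only has to be set up once per size/type.
    a = np.ctypeslib.as_array(arr)

    # Passing a bare pointer: the shape must be supplied, and the setup
    # happens on the pointer object itself, i.e. again for every pointer.
    ptr = ctypes.cast(arr, ctypes.POINTER(ctypes.c_double))
    b = np.ctypeslib.as_array(ptr, shape=(5,))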
