I have a C library that I am wrapping in Python using ctypes. The C library contains many arrays (tens of thousands of arrays on the order of 5-100 elements each, plus a few much longer arrays) that I want to access as numpy arrays in Python. I thought this would be straightforward using `numpy.ctypeslib.as_array`; however, when I profile my code with cProfile, I find that it is much faster to use a Python loop to manually copy (!) the data from the ctypes pointers into numpy arrays that I create on the Python side. Is `ctypeslib.as_array` known to be slow? I would have thought that simply reinterpreting some memory as a numpy array would be much faster than copying it element by element in a Python loop.
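For reference, here is a minimal sketch of the two approaches being compared, using a stand-in ctypes buffer in place of the memory the C library would actually return (`n`, `buf`, and `ptr` are illustrative names, not from my real code):

```python
import ctypes
import numpy as np
import numpy.ctypeslib as ctl

# Stand-in for a buffer that the C library would hand back:
# a ctypes pointer to 50 doubles.
n = 50
buf = (ctypes.c_double * n)(*range(n))
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_double))

# Approach 1: reinterpret the memory as a numpy array (no data copy).
a = ctl.as_array(ptr, shape=(n,))

# Approach 2: element-by-element copy in a Python loop.
b = np.empty(n, dtype=np.float64)
for i in range(n):
    b[i] = ptr[i]

assert np.array_equal(a, b)
```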
One approach is to add an `__array_interface__` property to the ctypes array type yourself; that would have to be done once for each size/type of array. Used with a pointer, `as_array` creates the interface on the pointer object itself and then returns `array(obj, copy=False)`. `as_array` now adds the `__array_interface__` itself, so that question is out of date. If the OP is using ctypes arrays instead of pointers, adding the interface only has to be done once for each size/type of array.
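A minimal sketch of that idea, assuming 50-element `c_double` arrays; `ArrayType`, `view_as_numpy`, and the stand-in buffer are illustrative names, not part of ctypes or numpy:

```python
import ctypes
import numpy as np
import numpy.ctypeslib as ctl

# Define the concrete ctypes array type once per size/dtype.
ArrayType = ctypes.c_double * 50

def view_as_numpy(ptr):
    """Reinterpret a c_double pointer as a length-50 numpy view (no copy)."""
    # Casting to the concrete array type lets as_array reuse the
    # __array_interface__ already attached to ArrayType, instead of
    # rebuilding interface information for the bare pointer on each call.
    arr = ctypes.cast(ptr, ctypes.POINTER(ArrayType)).contents
    return ctl.as_array(arr)

# Example use with a stand-in buffer in place of the C library's memory.
buf = ArrayType(*range(50))
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_double))
view = view_as_numpy(ptr)
view[0] = -1.0          # writes through to the underlying C memory
assert buf[0] == -1.0
```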