I'm having trouble converting a pandas Series of length M, where each element is a numpy array of length N, into a matrix/numpy array/DataFrame of shape MxN.
Example:
import pandas as pd
import numpy as np
from scipy import stats
# 1000 samples assigned to 10 groups (randint's upper bound is exclusive, hence 11)
d = pd.DataFrame({'grp': np.random.randint(1, 11, 1000), 'x': np.random.rand(1000)})
# per-group KDE evaluated at 100 points on [0, 1]; each result is a length-100 numpy array
s = d.groupby('grp')['x'].apply(lambda x: stats.gaussian_kde(x.values, bw_method=.01).evaluate(np.linspace(0, 1, 100)))
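For context, this is roughly what s looks like; the shapes below assume the code above runs as written (10 group labels, 100 evaluation points per group):

print(type(s))          # <class 'pandas.core.series.Series'>
print(len(s))           # 10 -- one entry per group label
print(s.iloc[0].shape)  # (100,) -- each entry is a numpy array of KDE values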
The result s is a Series whose entries are of type numpy.ndarray. How do I convert this into something of shape 10 (groups) by 100 (evaluation bins)?
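A minimal sketch of the kind of conversion I'm after, assuming every group produces the same 100 evaluation points so the per-group arrays can simply be stacked row-wise:

# stack the per-group arrays into a single (n_groups, 100) numpy array
m = np.vstack(s.values)   # shape (10, 100)

# or keep the group labels as the row index in a DataFrame
df = pd.DataFrame(np.vstack(s.values), index=s.index, columns=np.linspace(0, 1, 100))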