A literal translation of the Matlab code would be:

import numpy as np

x = np.zeros((parts, 2))
for i in range(parts):
    x[i, 0] = i*L + 1
    x[i, 1] = (i+1)*L
Note that Matlab uses 1-based indexing while Python uses 0-based indexing; this difference accounts for where the +1 offsets show up.
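A quick sketch (with made-up values) of the indexing difference:

```python
import numpy as np

a = np.array([10, 20, 30])

# Matlab's a(1) is the first element; in Python/NumPy it's a[0].
print(a[0])  # -> 10

# With the loop variable i starting at 0 instead of 1, Matlab's
# start index (i-1)*L + 1 for segment i becomes i*L + 1 in Python.
```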
However, when using NumPy you'll get much better performance if you avoid modifying an array element by element. Instead, express the calculation using as few NumPy operators or function calls as possible, each affecting a whole array at once. By doing this, you off-load as much work as possible to NumPy's fast underlying C/Fortran-compiled routines and reduce the time spent executing slower Python code.
This usually means you want to avoid Python for-loops, since a loop implies many Python statements executed one at a time.
So, for example, a better way to express the above calculation would be:

x = np.zeros((parts, 2))
x[:, 0] = np.arange(1, parts*L + 1, L)  # stop at parts*L + 1 so the last start is included even when L == 1
x[:, 1] = x[:, 0] + L - 1
Notice that the values in x are filled in using just two assignments, each of which fills a whole column of x "all at once".
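As an aside, the pre-allocated zeros array isn't strictly needed either; assuming the same parts and L, the two columns can be stacked directly (note this yields an integer array rather than the float array np.zeros produces):

```python
import numpy as np

parts, L = 4, 3  # small example values for illustration

starts = np.arange(1, parts*L + 1, L)          # 1, 4, 7, 10
x = np.column_stack([starts, starts + L - 1])  # pair each start with its end
print(x)
# [[ 1  3]
#  [ 4  6]
#  [ 7  9]
#  [10 12]]
```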
To give a sense of what a difference array-based operations make, here is an (IPython) timeit test using parts = 10000, L = 3:
In [16]: %%timeit
   ....: x = np.zeros((parts, 2))
   ....: x[:, 0] = np.arange(1, parts*L + 1, L)
   ....: x[:, 1] = x[:, 0] + L - 1
10000 loops, best of 3: 51.9 µs per loop
In [17]: %%timeit
   ....: x = np.zeros((parts, 2))
   ....: for i in range(parts):
   ....:     x[i, 0] = i*L + 1
   ....:     x[i, 1] = (i+1)*L
100 loops, best of 3: 3.58 ms per loop
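To confirm the two versions agree, here is a quick equivalence check using the same values as the timings:

```python
import numpy as np

parts, L = 10000, 3  # the values used in the timings above

# Loop version
x_loop = np.zeros((parts, 2))
for i in range(parts):
    x_loop[i, 0] = i*L + 1
    x_loop[i, 1] = (i+1)*L

# Vectorized version
x_vec = np.zeros((parts, 2))
x_vec[:, 0] = np.arange(1, parts*L + 1, L)
x_vec[:, 1] = x_vec[:, 0] + L - 1

print(np.array_equal(x_loop, x_vec))  # -> True
```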