I'm developing a model based on neural network principles. I have an input layer, weights, and an output layer:
[1,2] -- [ [1,1] , [1,1] ] --> [3,3]
My question is whether Python (with numpy) has a simple way to compute the output layer without explicit loops.
The current implementation is:
for i in range(number_of_out_neurons):
    # multiply the i-th weight row element-wise with the input, then sum
    out_neuron_adder = weights[i] * all_input_layer
    out_neuron[i] = sum(out_neuron_adder)

Since each output neuron is just the input multiplied element-wise by that neuron's weights and then summed, you can do a dot product instead. But this will only work if the input dimensions are compatible with the weight dimensions.
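
A minimal sketch using the numbers from the question, assuming weights is laid out with one row per output neuron:

    import numpy as np

    # Example from the question: input [1, 2], weights [[1, 1], [1, 1]]
    all_input_layer = np.array([1, 2])
    weights = np.array([[1, 1],
                        [1, 1]])

    # Each output neuron is the dot product of its weight row with the input,
    # so the whole output layer is a single matrix-vector product.
    out_neuron = weights @ all_input_layer   # or: np.dot(weights, all_input_layer)
    print(out_neuron)                        # -> [3 3]

Here weights has shape (number_of_out_neurons, number_of_in_neurons) and the input has shape (number_of_in_neurons,); if the inner dimensions don't match, numpy raises an error instead of computing anything, which is the dimension caveat above.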