- May 30, 2005
I know the backpropagation algorithm has been superseded by newer methods, but I have never formally studied neural networks and only recently started looking into them as a hobby. My question is whether a network trained with backpropagation is guaranteed to converge on functions it is capable of learning. For example, I wrote code for a three-layer network and tried to train it (with 3 hidden nodes) to compute the XOR function. It succeeds about 75% of the time, but the other 25% of the time it fails to converge within 100,000 training trials. Is this a bug in my code or a property of the network?
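For reference, here is a minimal sketch of the setup described: a 2-3-1 sigmoid network trained with plain online backpropagation on XOR. The learning rate, weight-initialization range, and random seed are my assumptions, not taken from the original code. With random initialization, a network like this can occasionally settle into a poor local minimum instead of converging, which is consistent with the roughly 75%/25% behavior described.

```python
import math
import random

random.seed(0)  # assumed seed; results vary with initialization

N_IN, N_HID, N_OUT = 2, 3, 1  # 3-layer net with 3 hidden nodes, as in the post
LR = 0.5                      # assumed learning rate

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init():
    # Small random weights; the last entry of each row is the bias weight.
    w_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
    w_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]
    return w_hid, w_out

def forward(w_hid, w_out, x):
    h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1]) for w in w_hid]
    o = sigmoid(sum(w_out[j] * hj for j, hj in enumerate(h)) + w_out[-1])
    return h, o

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def train(trials=100_000):
    w_hid, w_out = init()
    for _ in range(trials):
        x, t = random.choice(XOR)
        h, o = forward(w_hid, w_out, x)
        # Output delta: (target - output) times the sigmoid derivative o*(1-o).
        d_out = (t - o) * o * (1 - o)
        # Hidden deltas: error propagated back through the output weights.
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(N_HID)]
        # Update output weights (bias last).
        for j in range(N_HID):
            w_out[j] += LR * d_out * h[j]
        w_out[-1] += LR * d_out
        # Update hidden weights (bias last in each row).
        for j in range(N_HID):
            for i in range(N_IN):
                w_hid[j][i] += LR * d_hid[j] * x[i]
            w_hid[j][-1] += LR * d_hid[j]
    return w_hid, w_out

w_hid, w_out = train()
preds = [round(forward(w_hid, w_out, x)[1]) for x, _ in XOR]
print(preds)
```

Re-running this with different seeds shows the same pattern: most initializations learn XOR, but some never escape a flat region or local minimum, so the occasional failure is a known property of gradient descent on this architecture rather than necessarily a bug.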