
Suggestion

Hi Ricky,

This page looks good! Here are a few suggestions:

1. Some of the terminology on this page is a little hard to understand, for instance the ReLU activation function. I hope you can add more external links so people do not need to google everything themselves.

2. I don't understand why the input for each sample is the vector [x, x², x³, x⁴]; would you mind explaining this to me?

The experiment results look good!

Sincerely,

Junyuan Zheng

JunyuanZheng (talk) 18:47, 19 April 2016

Hey Junyuan,

I've added a link explaining ReLU. It's a simple function that removes all negative values: ReLU(x) = max(0,x).
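To make the definition concrete, here is a minimal sketch of that function in Python (the function name is just illustrative):

```python
def relu(x):
    """ReLU activation: removes all negative values, ReLU(x) = max(0, x)."""
    return max(0.0, x)

print(relu(-3.0))  # negative input -> 0.0
print(relu(2.5))   # positive input passes through unchanged -> 2.5
```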

The most basic way to learn a function y = f(x) is to construct a neural network that approximates it with ŷ = nn(x). This is a neural network with 1 input (x) and 1 output (y), so it has to learn the properties of the function from x alone. To learn a complex function, such a network needs to be large. To simplify the network's job, I essentially give it more information in the inputs: the network instead learns the approximation ŷ = nn(x, x², x³, x⁴), so it now has 4 inputs and 1 output.
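As a sketch of what that setup looks like, here is a tiny hypothetical network in plain Python: each scalar sample x is expanded into the 4-feature input [x, x², x³, x⁴], then passed through one ReLU hidden layer. The weights below are illustrative placeholders, not trained values from the experiment.

```python
def features(x):
    """Expand one scalar sample into the 4-dimensional input vector."""
    return [x, x**2, x**3, x**4]

def relu(v):
    """Elementwise ReLU: negative entries become 0."""
    return [max(0.0, a) for a in v]

def dense(inputs, weights, biases):
    """One fully connected layer; weights has one row per output unit."""
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Tiny network: 4 inputs -> 2 hidden ReLU units -> 1 output.
W1 = [[0.5, -0.2, 0.1, 0.0],   # placeholder weights
      [0.3, 0.4, -0.1, 0.2]]
b1 = [0.1, -0.1]
W2 = [[1.0, -0.5]]
b2 = [0.0]

x = 2.0
hidden = relu(dense(features(x), W1, b1))
y = dense(hidden, W2, b2)[0]   # y ≈ nn(x, x², x³, x⁴)
print(y)
```

The point is that the network no longer has to discover the higher-order terms itself; they are handed to it as extra inputs.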

TianQiChen (talk) 22:34, 20 April 2016