04 Feb 2016, 03:37
The neural network I've been training for the past few days uses a prime text as the "ignition" for generating predicted text. One of the phrases I use is: "the purpose of life is". A minute ago the NN generated the following:
"The purpose of life is somewhat orgasm."
When I "loosened" the output layer a bit, the answer changed to:
"The purpose of life is somewhat motivated by politicians"
It was getting interesting, so I continued with this:
"The purpose of self-bondage is possible to get free."
Bad grammar, but the idea is clear 😉
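For the curious, here's a minimal sketch of the idea, not the real training code: the prime text seeds a character-level model, and I'm treating the "loosening" of the output layer as raising the softmax sampling temperature. The vocabulary, the stand-in logits function, and all the names below are placeholders for illustration only.

```python
# Minimal sketch: prime-text generation with temperature sampling.
# Assumption: "loosening" the output layer = sampling with a higher
# softmax temperature, which flattens the next-character distribution.
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz ._")
char_to_idx = {c: i for i, c in enumerate(vocab)}


def next_char_logits(context: str) -> np.ndarray:
    """Stand-in for the trained network: logits over the vocabulary.

    A real model would run `context` through its recurrent layers;
    here we return deterministic pseudo-random logits so the sketch runs.
    """
    seed = sum(char_to_idx.get(c, 0) for c in context[-8:])
    return np.random.default_rng(seed).normal(size=len(vocab))


def sample(logits: np.ndarray, temperature: float) -> int:
    """Softmax sampling; higher temperature "loosens" the output,
    lower temperature makes the choice more greedy."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)


def generate(prime: str, length: int = 40, temperature: float = 1.0) -> str:
    text = prime
    for _ in range(length):
        idx = sample(next_char_logits(text), temperature)
        text += vocab[idx]
    return text


print(generate("the purpose of life is ", temperature=0.7))
print(generate("the purpose of life is ", temperature=1.5))  # "loosened"
```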
"The purpose of life is somewhat orgasm."
When I "loosened" the output layer a bit, the answer changed to:
"The purpose of life is somewhat motivated by politicians"
It was getting interesting, so I continued with this:
"The purpose of self-bondage is possible to get free."
Bad grammar, but the idea is clear 😉