Hacker News

By the universal approximation theorem, we can use artificial neural networks with ReLUs to approximate a function to arbitrary accuracy. But all we get out at the end is weights. This approach feels similar, but provides code in the end.
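A minimal sketch of the "weights, not code" point: a one-hidden-layer ReLU network can fit a target function closely, yet the artifact it leaves behind is just a few arrays of numbers. The target function, layer sizes, and the random-feature training shortcut here are all illustrative assumptions, not anything from the comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target function on [-1, 1] (an assumption for the sketch).
f = lambda x: np.sin(2 * x)
x = np.linspace(-1.0, 1.0, 200)[:, None]
y = f(x)

# Hidden layer: 50 ReLU units with random weights and biases.
W = rng.normal(size=(1, 50))
b = rng.normal(size=50)
H = np.maximum(0.0, x @ W + b)  # hidden activations, shape (200, 50)

# Output weights fit by least squares (a cheap stand-in for training).
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w_out
max_err = float(np.max(np.abs(pred - y)))
print(f"max |error| = {max_err:.4f}")

# The "result" is only these weight arrays -- no readable code:
print(W.shape, b.shape, w_out.shape)
```

Even when the fit is tight, inspecting `W`, `b`, and `w_out` tells you almost nothing about what function was learned, which is the contrast with an approach that emits code.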

