Hacker News
mcguire on May 9, 2023 | on: Language models can explain neurons in language mo...
They also mention that they got a score above 0.8 for 1,000 neurons out of GPT-2 (which has 1.5B parameters(?)).
sebzim4500 on May 9, 2023
1.5B parameters, only 300k neurons. The number of connections is roughly quadratic with the number of neurons.
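A back-of-envelope check of these figures, assuming the standard GPT-2 XL configuration (48 layers, d_model = 1600, MLP hidden width 4 × d_model); the config numbers are assumptions not stated in the thread:

```python
# Assumed GPT-2 XL configuration (not from the thread itself).
n_layers = 48
d_model = 1600
d_mlp = 4 * d_model               # 6400 MLP neurons per layer

mlp_neurons = n_layers * d_mlp
print(mlp_neurons)                # 307200, i.e. the "only 300k neurons"

# Each layer's MLP alone has two d_model x d_mlp weight matrices, so the
# connection count grows quadratically with the layer widths:
mlp_params = n_layers * 2 * d_model * d_mlp
print(f"{mlp_params / 1e9:.2f}B") # 0.98B -- most of the ~1.5B parameters
```

This is why a 1.5B-parameter model can have only ~300k neurons: parameters count connections, which scale roughly with the product of adjacent layer widths, not with the neuron count itself.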
oofsa on May 9, 2023
I thought they had only applied the technique to 307,200 neurons. 1,000 / 307,200 ≈ 0.33% is still low, but considering that not all neurons would be useful, since they are initialized randomly, it's not too bad.
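The fraction quoted above can be checked directly (1,000 well-explained neurons out of the 307,200 the technique was applied to):

```python
# Sanity check of the percentage in the comment above.
explained = 1_000
total = 307_200
print(f"{explained / total:.2%}")  # 0.33%
```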