
Not Apple adapters per se, but LoRA adapters. It's a way of fine-tuning a model such that you keep the base weights unchanged but keep a smaller set of tuned weights to help with specific tasks.

(Edit) Apple is using them in Apple Intelligence, hence the association, but the technique was around before that.
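
For anyone curious what that looks like in practice, here's a minimal sketch of the idea in PyTorch. The layer sizes, rank, and names are illustrative assumptions, not Apple's actual setup: you wrap a frozen linear layer and train only a small low-rank correction on top of it.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base linear layer plus a trainable low-rank update:
        y = W x + scale * B A x  (illustrative sketch, not Apple's code)."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            # Freeze the original weights; only the adapter is trained.
            for p in self.base.parameters():
                p.requires_grad = False
            # Low-rank factors: A projects down to `rank`, B projects back up.
            # B starts at zero so the adapter initially changes nothing.
            self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            # Frozen base output plus the small trainable correction.
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

    # Usage: wrap an existing layer and optimize only the adapter parameters.
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    optimizer = torch.optim.AdamW(
        [p for p in layer.parameters() if p.requires_grad], lr=1e-4)

The point is that the adapter (A and B) is tiny compared to the base weights, so you can keep one base model on disk and swap in different small adapter files per task.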


