Not Apple adapters per se, but LoRA adapters. It’s a way of fine-tuning a model such that you keep the base weights unchanged, but then keep a smaller set of tuned weights that help with specific tasks.
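A minimal numpy sketch of the idea (illustrative only, not Apple's actual implementation): the base weight matrix stays frozen, and a low-rank product `B @ A` is trained per task and added on top.

```python
import numpy as np

d, k, r = 8, 8, 2                   # layer dimensions and LoRA rank (r << d)
rng = np.random.default_rng(0)

W_base = rng.normal(size=(d, k))    # frozen base weights, shared across tasks
A = rng.normal(size=(r, k)) * 0.01  # trainable low-rank factor (per task)
B = np.zeros((d, r))                # B initialized to zero, so the adapter
                                    # starts as a no-op on top of the base model

def forward(x, W, A, B):
    # Effective weight = frozen base + low-rank adapter delta
    return x @ (W + B @ A).T

x = rng.normal(size=(1, k))
y = forward(x, W_base, A, B)
```

The storage win is that each task only needs the small `A` and `B` matrices (here `r * (d + k)` numbers) rather than a full copy of `W_base`, which is why one base model can serve many adapters.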
(Edit) Apple is using them in their Apple Intelligence, hence the association. But the technique was around before.