It's not irrelevant: there is a minor area of ML called 'curriculum learning' which asks how to order examples so as to teach an ML algorithm most efficiently, and the idea comes up in some other contexts too. For example, a boosting algorithm which focuses on optimizing performance on hard/misclassified cases can be seen as somewhat like curriculum learning; there are variants of gradient descent which focus on hard examples rather than wasting time on cases where the NN already gets the right answer; and in 'active learning', you want to pick the example which will teach the algorithm the most. But there's not much you can take from known pedagogy at the moment and apply straight to NNs. Even the non-bullshit parts of education like spaced repetition have no clear analogue for tasks like 'train an RNN to write news headlines based on article text using this corpus of human-written headlines'.
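For concreteness, here's a toy sketch (mine, not from any pedagogy literature) of what those three ideas look like as example-ordering rules, using a dumb logistic-regression model trained with plain SGD; the 'difficulty' proxy used here (current per-example loss) is just one common heuristic, not anything canonical.

```python
# Toy sketch: curriculum ordering, hard-example emphasis, and active-learning-style
# selection, all on a logistic-regression model. Everything here is illustrative;
# the per-example loss is an assumed difficulty proxy, not *the* way to do it.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data.
n, d = 1000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_loss(w, X, y):
    # Cross-entropy loss for each example under the current weights.
    p = sigmoid(X @ w)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def sgd_step(w, Xb, yb, lr=0.1):
    # One gradient step on a minibatch.
    grad = Xb.T @ (sigmoid(Xb @ w) - yb) / len(yb)
    return w - lr * grad

def curriculum_epoch(w, X, y, batch_size=32):
    # Curriculum learning: visit examples from easiest (lowest current loss)
    # to hardest, instead of in random order.
    order = np.argsort(per_example_loss(w, X, y))
    for i in range(0, len(order), batch_size):
        idx = order[i:i + batch_size]
        w = sgd_step(w, X[idx], y[idx])
    return w

def hard_example_epoch(w, X, y, batch_size=32):
    # Boosting-flavored variant: sample minibatches in proportion to current loss,
    # so updates concentrate on the cases the model still gets wrong.
    losses = per_example_loss(w, X, y)
    p = losses / losses.sum()
    for _ in range(len(X) // batch_size):
        idx = rng.choice(len(X), size=batch_size, replace=False, p=p)
        w = sgd_step(w, X[idx], y[idx])
    return w

def active_learning_pick(w, X_pool, k=10):
    # Active-learning flavor: pick the k unlabeled examples the model is least sure
    # about (predicted probability closest to 0.5) as the most informative to label.
    p = sigmoid(X_pool @ w)
    return np.argsort(np.abs(p - 0.5))[:k]

w = np.zeros(d)
for epoch in range(5):
    w = curriculum_epoch(w, X, y)   # or hard_example_epoch(w, X, y)
print("train accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())
```

None of which tells you anything about, say, how to space out the headline corpus over training in the way spaced repetition would suggest for a human learner, which is the point: the analogy mostly breaks down once you leave the 'which example next' question.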