Hi,
I have a question related to AI Engineering (Chip Huyen), Chapter 8: Dataset Engineering (page 703). Below is the excerpt:
“You investigate and find that in the training data, there are several examples of annotations with unsolicited suggestions. You put in a request to remove these examples from the training data and another request to acquire new examples that demonstrate fact-checking without unsolicited rewriting.”
What is being suggested sounds like an unlearning process. Is there a way to make a neural network forget or remove the influence of certain training data after it has already been trained? My reading of the excerpt is that it suggests retraining the model from a previous checkpoint (one saved before the problematic data was used for training) on the cleaned data plus the newly acquired examples. Any ideas?
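To make my reading concrete, here is a minimal sketch in plain PyTorch of what I imagine the "retrain from an earlier checkpoint" approach looks like. Everything here is my own illustration, not code from the book: the filter heuristic (`contains_unsolicited_rewrite`), the checkpoint layout, and the training loop are all assumptions.

```python
# A minimal sketch of "roll back to a clean checkpoint, then retrain on
# cleaned data" - illustrative only, not code from the book.

import torch
from torch.utils.data import DataLoader


def contains_unsolicited_rewrite(target_text: str) -> bool:
    # Hypothetical filter: in practice this might be a regex heuristic or a
    # small classifier that flags annotations containing unsolicited
    # suggestions/rewrites.
    return "suggested rewrite:" in target_text.lower()


def retrain_from_clean_checkpoint(model, optimizer, loss_fn,
                                  old_examples, new_examples,
                                  checkpoint_path, epochs=1):
    """Restore a pre-contamination checkpoint and train on cleaned data."""
    # 1. Roll back to a checkpoint saved before the bad examples were seen,
    #    so the model's weights carry no trace of them.
    state = torch.load(checkpoint_path)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])

    # 2. Drop the flagged (input, target) pairs and mix in the newly
    #    acquired examples that demonstrate fact-checking without rewriting.
    cleaned = [(x, y) for x, y in old_examples
               if not contains_unsolicited_rewrite(y)]
    loader = DataLoader(cleaned + list(new_examples),
                        batch_size=8, shuffle=True)

    # 3. Continue training on the cleaned dataset only.
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model
```

If no checkpoint from before the contamination exists, I assume the fallback would be retraining from scratch on the cleaned data, which is why I'm curious whether there is a cheaper "unlearning" alternative.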