So I am currently training some deep learning models for some basic classification problems, and I am trying to figure out whether it is possible to change the output size of a model in case I want to retrain it on a different dataset in which the number of classes is different.
I have seen a lot of "retraining" posts, but I don't fully understand how this works internally. Are all the weights updated with the new dataset, except that the original last layer is replaced by a new one connecting the previous hidden layer to an output layer with a different number of neurons? I also saw that it is sometimes common to retrain only the last layer. How is this done? And how would this be coded for a model in PyTorch?
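To make the question concrete, here is a minimal sketch of what I imagine the "retrain only the last layer" approach looks like. The specifics are assumptions on my part: I'm using a torchvision ResNet-18 with its pretrained weights just as a stand-in for my own previously trained model, and pretending the new dataset has 10 classes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a previously trained model (pretrained ImageNet weights here,
# standing in for "my model trained on the old dataset").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all existing weights so they are not updated during retraining
# ("retrain only the last layer").
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new one whose output size
# matches the number of classes in the new dataset (10 is just an example).
num_new_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only parameters with requires_grad=True (the new fc layer) go to the
# optimizer, so everything else stays fixed.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Is this roughly the idea? And if I instead wanted all the weights to be updated on the new dataset (not just the new head), would I simply skip the freezing loop and pass `model.parameters()` to the optimizer?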