No, they took a pre-existing term, "Generative Pretraining" [0], and applied it to Transformers (a Google innovation [1]) to get Generative Pretrained Transformers [2]. Even going by the full name, that paper doesn't use "GPT" or "Generative Pretrained Transformers" anywhere, from what I can tell; this commenter [3] claims the name was first used in the BERT paper.
[0] See this 2012 example: http://cs224d.stanford.edu/papers/maas_paper.pdf
[1] https://proceedings.neurips.cc/paper/2017/file/3f5ee243547de...
[2] https://cdn.openai.com/research-covers/language-unsupervised...
[3] https://news.ycombinator.com/item?id=39381802