The script appears to rely on jQuery (which is presumably already included in the pages being scraped in this case). If you're already familiar with using jQuery for DOM manipulation, then using it for scraping is incredibly easy.
One advantage is that it's not always instantly obvious whether you'll need JS to execute before you can scrape a page. If you start out with a simple HTML parser and then find out that the JS needed to run first, you have to start over. If you start out using PhantomJS and then find out that you didn't need any of the original JS to run, your script still works.
When you say 'rely on jQuery', I think it would be more precise to say that it relies on CSS selectors. Most scraping libraries will give you a way to query an HTML parse tree with CSS selectors.
I guess PhantomJS is as good a tool as any, but there is really no need to evaluate JavaScript for a bit of plain HTTP+HTML parsing.
It looks to me like it's using jQuery specific functions. It could be done with a simpler selector engine, but in this case, it looks pretty clear that it's either jQuery or a compatible library like Zepto.
It's CSS selectors, plus wrapping the DOM element in an object that gives it an `attr` method, which is jQuery style. Other libraries may use that syntax too, but I'm pretty sure it started with jQuery (and if not, it was certainly popularized by it).
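For anyone unfamiliar with the pattern being described: here's a toy sketch (not real jQuery, and the node objects are plain JS stand-ins for DOM elements) of what "wrapping the element to give it an `attr` method" looks like.

```javascript
// Toy jQuery-style wrapper: takes a plain node object and returns a
// wrapper exposing an attr() accessor, the way $(el).attr('href') does.
function wrap(node) {
  return {
    attr: function (name) {
      // Look the attribute up on the wrapped node.
      return node.attributes[name];
    }
  };
}

// A stand-in for a scraped <a> element.
const link = { tag: 'a', attributes: { href: 'https://example.com' } };

console.log(wrap(link).attr('href')); // https://example.com
```

Real jQuery does much more (matched sets, chaining, setters), but the wrap-then-call-`attr` shape is the tell that a script was written against jQuery or a compatible API like Zepto.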