My recent experiences raise the question of the value of vintage ratings. My observations are that they are not very reliable, maybe even misleading.
For example, in my recent tastings, I found the 2001 wines quite good, whereas the 2002 Mount Edelstone was over the hill (pardon the pun). Initially, nobody talked about the quality of 2001, but wine writers raved about 2002. It is clear that warmer vintages, with more initial appeal and ripeness, get higher ratings. This does not mean that they produce better wines.
A major example of this trend was the 2000 vintage in Piedmont. It initially received 100 points from Wine Spectator and was hailed as the best vintage Italy had ever produced. At a tasting a little while ago, it was revised down to 93 points, as the wines were found a little forward and not as well structured as in other years.
For those who can remember, the same thing happened with 1990 in Australia. Later, many writers found the wines from 1991 better structured (although it must be said that 1990 produced some excellent wines).
The other argument against vintage ratings, of course, is that there is a lot of variation between wineries and vineyards within any given year.
So why do we have vintage ratings, and why do they seem to be influential? One reason, I suspect, is that winemakers often rate the latest vintage the best ever: if the weather was hot, the grapes got in just before the heatwave; if the weather was cold, the grapes got extra hang time, and so on. You can't really blame the wineries, as they need to market their product, but vintage ratings are probably not a good counterbalance.
The best test is your own taste. Before you commit to a case, buy a bottle. And once you have gained some experience, you will be able to tell which characteristics might serve a wine well into the future.