7 August 2019
How predictable is brand when it comes to determining wine quality? Put another way, should the better known brands readily submit their wines to the potentially more objective world of blind tasting? I say “objective” because the reality is that wine is performance art, and even the best made examples don’t always perform as they should on the day. For that matter, neither do the best palates, or the best panels. When brand image is at risk, there’s a temptation to protect the investment in brand-building and forgo the independent endorsement a blind tasting might deliver.
I looked through my last two blind tastings with a view to determining whether the scores tallied in any way with the brand seedings. Partly this involves an analysis of price: what a wine costs should reflect the optimum revenue, given the perception of its value. In that sense it balances how the product is perceived with how much of it is available.
Take sauvignon blanc for example: R135 retail is about as premium as you get (unless you are pursuing the tiny-volume “icons” made more to position the top end of a range than actually to sell any wine). This is the price point of two wines, both tasted blind recently, and both of which scored within a point of each other. First up is the Jordan Cold Fact 2018, scoring 88 on my system – which equates to about 91 on the more generous calibration used by wine publications. It’s a good score, and certainly vindicates the brand: you buy Jordan wines because you know they are well made, and expect each of the different varietal bottlings to finish in the top 15% of a serious line-up.
It was pipped (admittedly only by a point) by the 2016 Elgin Ridge 282 – a less well-known wine, and therefore, given the pricing, under some pressure to over-deliver. This is exactly what it did: it’s a bigger, more muscular sort of sauvignon, and its aromatic profile reflects its appellation. There are notes of capsicum and blackcurrant leaf, whiffs of mown hay. It’s not showing any age, although the extra two years in bottle have helped the integration.
The same slight difference separated two benchmark Pinotages, the Beyerskloof 2017 Reserve and the newly released Kanonkop 2017. The former scored 89 (equal to a magazine/Platter score of 92) and it had clearly been made in a more accessible style. It costs half as much as the Kanonkop, which serves partly to acknowledge that although Beyers Truter used to make Kanonkop’s wines before setting up his own very successful Pinotage-focussed cellar, the Kanonkop brand has greater appeal.
The Kanonkop, at 90 (equal to 93), offers more reasons for cellaring it, especially once you know what it is. At present it’s way too oaky, without the integration you would expect of a wine destined for relatively immediate consumption. On the other hand, it has fabulous and intense fruit, so it is safe to predict that over time the wood will not dominate the vinosity. You might well ask why it hasn’t garnered a higher score. The answer is that the oak doesn’t need to have been this robust to achieve the same result once it’s properly aged. When you have the strength and fire-power of Kanonkop Pinotage, you don’t have to shy away from a little polish.
These readily available examples confirm that several of the better known, highly branded, players are not resting on their laurels: they are making wines which live up to their marketing pretensions. What isn’t fully explained however is the marginal score differential, compared with the gaping price gap. The Kanonkop is better than the Beyerskloof, but it’s not twice as good. That’s where shortage, and long term brand value, both play a role: great vintages, produced over several decades, make the Kanonkop, of which only a finite amount is available, that much more desirable. It’s what producers aspire to, and what consumers indulge.