By Craig Kolb, Acentric Marketing Research (Pty) LTD, 21 June 2019

CVA conjoint analysis is the oldest form of conjoint analysis, originally developed by Professor Paul E. Green in the 1970s. Over the years it has gone through various improvements and can still be purchased from software vendors (most notably IBM SPSS); however, it is not as popular as it once was.

In this article I will outline why CVA conjoint analysis may be a better choice than many assume.

Note: CBC (choice-based conjoint) is also referred to as discrete choice modelling (DCM). It comes in various ‘flavours’.

 

The advent of smartphones

The increasing popularity of smartphones over the last decade has meant that including an entire choice set on a single screen has become impractical. While respondents can conceivably scroll to see all of the options, this is likely to be frustrating given the number of choice sets in a typical CBC exercise. In such situations CVA, with its monadic (one-at-a-time) ratings approach, makes more sense.

Choice sets are not always more realistic than monadic exposure

Many real-world choice situations are more realistically measured in a monadic way. While the choice sets of CBC might seem ideal in the world of FMCG, there are numerous industries where you are unlikely to have an array of competitors standing right in front of you at the moment of choice. Decision making in these industries relies more heavily on memory. Examples include online stores, universities, cars, banks, insurance, software and housing.

The original reason CBC began to supplant CVA is not as relevant anymore

One of the original contentions of the developers of CBC conjoint analysis was that asking for choices provided ‘ratio-scaled’ data, as opposed to CVA’s ratings, which are interval-scaled. This is only partly true, since CVA uses decision rules such as BTL (Bradley-Terry-Luce) to rescale interval data to ratio data; however, admittedly, these decision rules do have an arbitrary quality, at least as originally conceived.
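To make that arbitrariness concrete, here is a minimal sketch in Python (all utilities are made-up numbers, not taken from any study) of the two most common decision rules applied to interval-scaled CVA utilities.

```python
import numpy as np

# Hypothetical interval-scaled total utilities for three profiles,
# derived from a single respondent's CVA model.
utilities = np.array([6.5, 5.0, 3.5])

# BTL (Bradley-Terry-Luce) rule: shares proportional to the utilities.
# Utilities must be positive, and shifting the utility scale changes
# the shares; this is part of the arbitrary quality noted above.
btl_shares = utilities / utilities.sum()

# Logit rule: shares proportional to exp(utility). Shift-invariant,
# but the implicit scale factor of 1.0 is still 'one size fits all'.
logit_shares = np.exp(utilities) / np.exp(utilities).sum()

print(btl_shares)    # approx. [0.43, 0.33, 0.23]
print(logit_shares)  # approx. [0.79, 0.18, 0.04]
```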

Green eventually introduced a way of rescaling the results of a CVA analysis more accurately (which, for whatever reason, is not as widely known). His suggestion was likely based on earlier work by Pessemier. Essentially, holdout choice tasks are used to estimate a rescaling parameter that then allows CVA conjoint to more closely emulate choice probabilities. The parameter allows a better fit than the ‘one size fits all’ approach of the decision rules.
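Green did not publish code for this, so the sketch below is only my illustration of the general idea: treat the holdout choice tasks as data and estimate a single rescaling parameter so that a logit rule applied to the CVA utilities best reproduces the observed choices (the utilities, the choices and the use of scipy are all illustrative assumptions, not his original procedure). The parameter plays much the same role as the ‘exponent’ found in many conjoint simulators.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: each row is one holdout choice task, with the
# CVA-derived utilities of the alternatives shown in that task.
holdout_utilities = np.array([
    [6.5, 5.0, 3.5],
    [4.0, 5.5, 5.0],
    [7.0, 6.0, 2.0],
])
chosen = np.array([0, 1, 0])  # index of the alternative actually chosen

def neg_log_likelihood(scale):
    """Negative log-likelihood of the holdout choices under a logit
    rule with a single rescaling (scale) parameter."""
    scaled = scale * holdout_utilities
    probs = np.exp(scaled) / np.exp(scaled).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(chosen)), chosen]).sum()

result = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10), method="bounded")
print("estimated rescaling parameter:", result.x)
```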

Comparable performance

While software developers may give the impression that CBC conjoint is clearly superior, the academic research is far less clear, both in terms of parameter estimates and in terms of predictive validity.

In terms of parameters, Karniouchina et al. (2009) found only slight differences between parameter estimates, implying similar estimates of attribute importance. Indeed, Karniouchina et al. (2009) concluded: “This study, along with the other articles in this research stream, strongly suggests that in traditional conjoint tasks, the parameter estimates produced by RB and CB conjoint models are likely to be quite similar.”

In terms of predictive validity, a study of rental apartments by Elrod et al. (1992) demonstrated that CVA and CBC conjoint produced similar results: “both approaches predict holdout shares well…”.

Karniouchina et al. (2009), in a study of laptops, found that ‘hit rates’ (the percentage of times the holdout choice matches the predicted choice) at the individual level were better for a more complex form of CBC called hierarchical Bayes (HB CBC); but no significant difference was found in segment-level or aggregate-level hit rates. In terms of predicted share of choice, “no significant differences between the individual- or segment-level RB [i.e. CVA conjoint] and CB [i.e. CBC conjoint] models” were found, while there was a significant difference at the aggregate level. One peculiarity of this study should be pointed out: HB was also used for the individual-level parameter estimates for CVA.
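For readers unfamiliar with the measure, a hit rate is simply the proportion of holdout tasks in which the model’s predicted first choice matches the respondent’s actual choice; the toy calculation below uses hypothetical data.

```python
import numpy as np

# Hypothetical holdout results for five respondents: the alternative each
# actually chose, and the alternative the model predicted (highest
# simulated utility) for the same task.
actual    = np.array([0, 2, 1, 1, 0])
predicted = np.array([0, 2, 0, 1, 0])

hit_rate = (actual == predicted).mean()
print(f"hit rate: {hit_rate:.0%}")  # 80%
```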

While hardly an exhaustive examination of the academic studies on this topic, these few studies should make it clear that there is nothing near a consensus on CBC being superior at predicting holdouts (preference share at the aggregate level). If the aim of your study is to predict market share, rather than preference share, then any differences in performance are even less relevant. When you consider that conjoint analysis is an incomplete model, in the sense that it ignores the promotional and accessibility aspects of the marketing mix and has modest external validity without calibration for these missing variables, it is doubtful that any slight differences between CVA and CBC in terms of holdout predictions matter much in practice.

CVA’s cognitive load is lower than CBC’s

CVA requires less effort on the respondent’s part, since respondents only need to evaluate one profile at a time. In contrast, CBC conjoint requires respondents to examine multiple profiles in each choice set before making a decision. Given how large these sets can become (I have seen some CBC choice sets run as large as eight profiles abreast), there is no doubt a large cognitive load on respondents, in exchange for very little information provided back (a single choice per set).

In total, CBC conjoint presents more profiles for a given number of attributes and levels. So, assuming CBC respondents pay as much attention to each profile, their load across the entire exercise is far greater. Of course, some simply don’t pay attention to all the profiles and provide poorer-quality data in return.

Figure 1: CVA versus CBC conjoint profile layout

CVA conjoint doesn’t require large samples in order to estimate parameters

CVA conjoint can be estimated for a single individual if necessary. CBC requires a much larger sample in order to estimate parameters, since less information is collected from each respondent.

As a result, CBC and its various flavours do not estimate individual-level parameters directly. The simplest form of CBC provides only one set of parameters for the entire sample, since there is insufficient information collected from respondents to estimate at the individual level.

That said, an enormous amount of effort has been put into trying to estimate individual-level parameters, or at least come close to it. These methods must ‘borrow’ information from the aggregate to estimate individual parameters, and a number of assumptions must be made in doing so (for instance, random parameters logit requires assumptions about the distributions of the parameters).
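By contrast, a single respondent’s CVA part-worths can be recovered with nothing more than ordinary least squares on a coded design. The sketch below uses a toy full-factorial design and made-up ratings, purely for illustration.

```python
import numpy as np

# Full factorial design over three two-level attributes (effects coding:
# +1 / -1), giving eight profiles. Real studies would typically use a
# fractional factorial with more attributes and levels.
X = np.array([
    [ 1,  1,  1],
    [ 1,  1, -1],
    [ 1, -1,  1],
    [ 1, -1, -1],
    [-1,  1,  1],
    [-1,  1, -1],
    [-1, -1,  1],
    [-1, -1, -1],
])
ratings = np.array([9, 7, 6, 4, 7, 5, 4, 2])  # one respondent's ratings (hypothetical)

# Add an intercept column and solve the least-squares problem.
X_design = np.column_stack([np.ones(len(X)), X])
partworths, *_ = np.linalg.lstsq(X_design, ratings, rcond=None)

print("intercept (average rating):", partworths[0])  # 5.5
print("part-worths:", partworths[1:])                # [1.0, 1.5, 1.0]
```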

 

CBC is often not worth the additional complexity

CBC conjoint (also referred to as discrete choice modelling) is a blanket term that conceals a bewildering array of options, and things can get complicated very quickly. Not only are there numerous models (such as Hierarchical Bayes and Latent Class) and software packages to choose from; there are also numerous decisions you must make prior to launch and after the study completes. These include decisions regarding the sample design, survey mode, experimental design, parameter estimation, partial profiles, hybridization and so on. Each of these can take considerable design time, and I haven’t even gotten to issues regarding the simulator setup, which is an entire topic on its own.

So CBC can become enormously involved for the practitioner. Worse, research users are going to have a harder time grasping the end result. Let’s take the parameters as an example. CVA part-worths are easy to explain as simple deviations from the average rating. In contrast, the parameters of CBC are expressed in terms of log odds, which, even when exponentiated and expressed as odds ratios, are still confusing.
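The contrast in interpretation looks something like this (all numbers are hypothetical):

```python
import numpy as np

# CVA: a part-worth is a simple deviation from the average rating.
average_rating = 5.5
partworth_brand_a = 1.5       # hypothetical
print(f"Brand A adds {partworth_brand_a} rating points above the average of {average_rating}")

# CBC: the raw parameter is a log odds; exponentiating gives an odds ratio.
logit_coef_brand_a = 0.9      # hypothetical
odds_ratio = np.exp(logit_coef_brand_a)
print(f"Brand A multiplies the odds of being chosen by roughly {odds_ratio:.1f}")
```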

In summary, you have to ask yourself: are uncertain gains in internal validity worth the additional cost and complexity? Even in situations where consumers will face choice arrays, such as the supermarket shelf, I’m not sure an adequate case can be made to justify CBC conjoint as the default choice.

 

References

Elrod, T., Louviere, J. J., & Davey, K. S. (1992). An Empirical Comparison of Ratings-Based and Choice-Based Conjoint Models. Journal of Marketing Research, 29(3), 368. doi:10.2307/3172746

Karniouchina, E., Moore, W., Rhee, B., & Verma, R. (2009). Issues in the Use of Ratings-Based Versus Choice-Based Conjoint Analysis in Operations Management Research. European Journal of Operational Research, 197, 340-348. doi:10.1016/j.ejor.2008.05.029

Baier, D., Pełka, M., Rybicka, A., & Schreiber, S. (2015). Ratings-/Rankings-Based Versus Choice-Based Conjoint Analysis for Predicting Choices. doi:10.1007/978-3-662-44983-7_18

About the author

Craig Kolb is a quantitative marketing-research specialist. Craig has over 17 years’ experience conducting marketing research studies, with a special emphasis on survey-based measures and analytics. Craig believes surveys are an important, albeit often misused, way of understanding human beings, and a valuable sanity check on digital metrics, which often fail to deliver in terms of accuracy and insight.

Craig has written numerous papers over the years and has received extensive coverage in the media for his marketing research work.

Craig has a B.Soc.Sci. (Hons) degree and an Ordinary Certificate in Statistics from the RSS.