By Craig Kolb, Acentric Marketing Research (Pty) LTD, 21 June 2019
Conjoint analysis is usually divided into two broad types: Conjoint Value Analysis (CVA) and Choice Based Conjoint (CBC) - sometimes also referred to as Discrete Choice Modelling (DCM).
If any random end-user of conjoint analysis were asked what the difference is between CVA and CBC, they would most likely point to differences in how the data is collected. CVA uses ratings, rankings or paired comparisons (in this article I am usually referring to the ratings form), while CBC uses choice sets. However, the two also differ at each of the other major stages of a conjoint analysis procedure, namely: experimental design construction, the types of statistical models used, and the way simulators are constructed.
CVA conjoint analysis is the oldest form of conjoint analysis, developed into a commercially useful form by Professor Paul E. Green in the 1970s. Over the years it has gone through various improvements in the academic literature that have kept it relevant.
Both CVA and CBC have their own strengths and weaknesses, but perhaps because CBC came later, commentators (particularly vendors like Sawtooth) have focused on selling the ‘new kid on the block’. As a result, less seems to have been written about the advantages of CVA conjoint analysis.
So in this article I hope to address that imbalance by outlining the advantages of CVA; some of them from the literature and some from my own experience.
The advent of smartphones
One of the advantages, from my personal experience, is that CVA is better suited to smaller screens. CBC conjoint is burdened with the requirement that at least two alternatives (usually more) are shown next to each other on a screen. Fitting an entire 'choice set' on screen, in a legible way, is often impossible. As many respondents now choose to complete surveys on smartphones rather than desktops or laptops, this is not a trivial concern. While respondents could conceivably scroll horizontally to view all of the options, this is inconvenient and risks respondents not seeing all of them. Ratings-based CVA, with its 'one at a time' approach, doesn't have this problem. Each profile usually fits comfortably on screen, and even when scrolling is required it is normally vertical, which respondents must do anyway to reach the rating scale and 'continue' button at the bottom, making it unlikely they will miss any aspect of the profile.
Choice sets are not always more realistic than monadic exposure
It has often been claimed that CBC is somehow more 'realistic'. While the choice sets of CBC might seem ideal in the world of FMCG, there are numerous industries where you are unlikely to have an array of competitors standing right in front of you at the moment of choice. Many real-world choice situations are more realistically measured in a monadic way. Decision making in these industries relies more heavily on memory. Examples include online stores, universities, cars, banks, insurance, software and housing.
The original reason CBC began to supplant CVA is not as relevant anymore
One of the original contentions of the developers of CBC conjoint analysis was that asking for choices provided 'ratio-scaled' data, as opposed to CVA's ratings, which are interval-scaled. This is only partly true, since CVA uses decision rules such as BTL (Bradley-Terry-Luce) to rescale interval-scaled utilities into ratio-scaled choice probabilities; admittedly, though, these decision rules do have an arbitrary quality, at least as originally conceived.
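For readers unfamiliar with it, the BTL rule simply divides each product's total utility by the sum of utilities across all products in the simulation to produce a share. A minimal sketch, with the utility values invented purely for illustration:

```python
def btl_shares(utilities):
    """Bradley-Terry-Luce rule: share_i = u_i / sum_j u_j.
    Assumes utilities are positive (e.g. predicted ratings)."""
    total = sum(utilities)
    return [u / total for u in utilities]

# Hypothetical predicted ratings for three competing products
shares = btl_shares([6.0, 3.0, 1.0])
print(shares)  # [0.6, 0.3, 0.1]
```

The arbitrariness the developers of CBC objected to is visible here: nothing in the ratings data itself says shares must be proportional to raw utilities rather than, say, some power of them.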
Green eventually introduced a way of rescaling the results of a CVA analysis more accurately (one that, for whatever reason, isn't as widely known). His suggestion was likely based on earlier work by Pessemier. Essentially, holdout choice tasks are used to estimate a rescaling parameter that allows CVA conjoint to emulate choice probabilities more closely. The fitted parameter gives a better fit than the 'one size fits all' approach of the fixed decision rules.
While software developers may give the impression that CBC is more accurate, the academic research is far less clear, both in terms of parameter estimates and in terms of predictive validity.
In terms of parameters, Karniouchina et al. (2009) found only slight differences between parameter estimates, implying similar estimates of attribute importance. Indeed, Karniouchina et al. (2009) concluded: “This study, along with the other articles in this research stream, strongly suggests that in traditional conjoint tasks, the parameter estimates produced by RB and CB conjoint models are likely to be quite similar.”
In terms of predictive validity, a study of rental apartments by Elrod et al. (1992) demonstrated that CVA and CBC conjoint produce similar results: “both approaches predict holdout shares well…”.
Karniouchina et al. (2009), in a study of laptops, found that 'hit rates' (the percentage of times the holdout choice matches the predicted choice) at the individual level were better for a more complex form of CBC estimated with hierarchical Bayes (HB CBC); but no significant difference was found in segment-level or aggregate-level hit rates. In terms of predicted share of choice, “no significant differences between the individual- or segment-level RB [i.e. CVA conjoint] and CB [i.e. CBC conjoint] models” were found, while there was a significant difference at the aggregate level. A peculiarity of this study should be pointed out: HB was also used for the individual-level parameter estimates for CVA.
While hardly an exhaustive examination of the academic literature, these few studies should make it clear that there is nothing near a consensus on CBC being superior at predicting holdouts (preference share at the aggregate level). If the aim of your study is to predict market share, rather than preference share, any differences in performance matter even less. Conjoint analysis is an incomplete model, in the sense that it ignores the promotional and accessibility aspects of the marketing mix and has modest external validity without calibration for these missing variables; it is therefore doubtful that any slight differences between CVA and CBC in holdout predictions matter much in practice.
CVA cognitive load is lower than CBC
CVA requires less effort on the respondent's part, since respondents only need to evaluate one profile at a time. In contrast, CBC conjoint requires respondents to examine multiple profiles in each choice set before making a decision. Given how large these can become – I have seen some CBC choice sets run as large as eight profiles abreast – there is no doubt a large cognitive load on respondents, in exchange for very little information (i.e. a single choice).
In total, CBC presents more profiles for a given number of attributes and levels. So, assuming CBC respondents pay as much attention to each profile, their load is far greater over the entire exercise. Of course, some simply don't pay attention to all the profiles and provide poorer-quality data in return.
Figure 1: CBC vs CVA layout
CVA conjoint doesn’t require large samples in order to estimate parameters
CVA conjoint can be estimated for a single individual if necessary. CBC requires a much larger sample in order to estimate parameters, since less information is collected from each respondent.
As a result CBC, and its various flavours, do not estimate individual level parameters directly. The simplest form of CBC only provides one set of parameters for the entire sample, since there is insufficient information collected from respondents to estimate at the individual level.
That said, an enormous amount of effort has been put into estimating individual-level parameters for CBC, or at least coming close. These methods must 'borrow' information from the aggregate to estimate individual parameters, and a number of assumptions must be made in doing so (for instance, random parameters logit requires assumptions about the distributions of the parameters).
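By contrast, estimating one CVA respondent's part-worths needs nothing more than ordinary least squares on that respondent's own ratings. A minimal sketch, with a hypothetical two-attribute design and invented ratings:

```python
import numpy as np

# Hypothetical 2-attribute design (brand: A/B, price: low/high),
# effects-coded so part-worths are deviations from the mean rating.
# Columns: intercept, brand A (+1) vs B (-1), low (+1) vs high (-1) price
X = np.array([
    [1,  1,  1],   # brand A, low price
    [1,  1, -1],   # brand A, high price
    [1, -1,  1],   # brand B, low price
    [1, -1, -1],   # brand B, high price
])
ratings = np.array([9.0, 6.0, 5.0, 2.0])  # one respondent's ratings

# OLS for this single respondent: no pooling across the sample needed
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, brand_pw, price_pw = coefs
```

With this invented data the intercept recovers the respondent's average rating, and each part-worth is the deviation attributable to that attribute level, which is also why CVA results are so easy to explain.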
CBC is often complex
CBC conjoint (also referred to as discrete choice modelling) is a blanket term that conceals a bewildering array of options, and things can get complicated very quickly. Not only are there numerous models (such as Hierarchical Bayes and Latent Class) and software packages to choose from, there are numerous decisions you must make prior to launch and after the study completes. These include decisions regarding the sample design, survey mode, experimental design, parameter estimation, partial profiles, hybridization and so on. Each of these can take considerable design time, and I haven’t even gotten to issues regarding the simulator setup, which is an entire topic on its own.
So CBC can become enormously involved for the practitioner. Worse, research users will have a harder time grasping the end result. Take the parameters as an example. CVA part-worths are easy to explain as simple deviations from the average rating. In contrast, the parameters of CBC are expressed as log odds, which, even when exponentiated and expressed as odds ratios, are still confusing.
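To illustrate the interpretation gap, compare a hypothetical CVA part-worth with a hypothetical CBC logit coefficient (both values invented for illustration):

```python
import math

# CVA: a part-worth of +1.5 means profiles with this attribute level
# were rated 1.5 scale points above the respondent's average rating.
cva_partworth = 1.5

# CBC: a logit coefficient of 0.8 must first be exponentiated into an
# odds ratio before it means anything to a lay audience...
cbc_coef = 0.8
odds_ratio = math.exp(cbc_coef)
print(odds_ratio)  # ~2.23: the odds of choice multiply by about 2.2
```

Even after the conversion, "the odds of choosing the product multiply by 2.2" is a harder sell in a boardroom than "ratings go up by 1.5 points".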
In summary, the uncertain gains in internal validity are not worth the additional cost and complexity. Even in situations where consumers will face choice arrays – such as the supermarket shelf – I’m not sure an adequate case can be made to justify CBC conjoint as the default choice.
Ready to launch your own CVA conjoint analysis survey?
Read more here about Acentric's CVA conjoint offering.
References
Elrod, T., Louviere, J. J., & Davey, K. S. (1992). An empirical comparison of ratings-based and choice-based conjoint models. Journal of Marketing Research, 29(3), 368. doi:10.2307/3172746
Karniouchina, E. V., Moore, W. L., van der Rhee, B., & Verma, R. (2009). Issues in the use of ratings-based versus choice-based conjoint analysis in operations management research. European Journal of Operational Research, 197(1), 340–348. doi:10.1016/j.ejor.2008.05.029
Baier, D., Pełka, M., Rybicka, A., & Schreiber, S. (2015). Ratings-/rankings-based versus choice-based conjoint analysis for predicting choices. doi:10.1007/978-3-662-44983-7_18
About the author
Craig Kolb is a quantitative marketing-research specialist. Craig has over 17 years' experience conducting marketing research studies, with a special emphasis on survey-based measures and analytics. Craig believes surveys are an important, albeit often misused, way of understanding human beings, and a valuable sanity check on digital metrics, which often fail to deliver in terms of accuracy and insight.
Craig has written numerous papers over the years and has received extensive coverage in the media for his marketing research work.
Craig has a B.Soc.Sci. (Hons) degree and an Ordinary Certificate in Statistics from the RSS.