July 9, 2009
Advanced Decision Tree Analysis in Litigation: An Interview With Marc Victor, Part II
For advanced decision analysis in litigation, where do we start? Last week we began to take our series on decision trees to the next level with Part I of our interview with Marc Victor of Litigation Risk Analysis, Inc., who pioneered the use of decision trees in dispute resolution and litigation in the 1970s. This post is Part II of that two-part interview, in Q & A format.
Marc, people often say that the “inputs” on a decision tree — the probabilities of various outcomes — are imprecise. One of the comments to our first post on decision trees put it this way:
The theoretical problem is with the assignment of probabilities and their meaning. Unless you are just goofing around with numbers, the assignment of a probability to an event presupposes that there is a frequency of similar events to count. This is hardly ever true in litigation, unless restricted to something like employment dismissal cases. Even then, I have trouble interpreting the numbers as anything more than subjective probabilities, i.e. just goofing around with numbers.
Is this a fair criticism?
I don’t think so. First, it’s not a question of probabilities being “precise” or “imprecise” — the idea is for them to be “realistic.” Second, assignment of a probability does not presuppose “a frequency of similar events to count.” A probability is simply a reflection of someone’s opinion of the likelihood of success in a particular situation. Lawyers have always given opinions like these, even in one-of-a-kind cases — they’ve simply used phrases (such as “pretty good chance”) much more often than they’ve used numbers. But because it’s very easy to show that the phrases are more ambiguous than the percentages, and because centuries-old probability theory tells us how to combine a 60% chance of success on one issue and a 25% chance of success on another issue to determine the overall chance of success (but doesn’t help us to combine a “pretty good chance” on one issue and a “definite possibility” on another), there are tremendous advantages in using the 0 to 100 scale rather than the “no chance” to “sure thing” scale. I can’t help but think that if the person who wrote the earlier comment heard his or her doctor say “I’m ‘reasonably confident’ you’ll come through this procedure without any complications,” they would immediately ask for clarification: “What do you mean by ‘reasonably confident’? 95%? 80%? 65%?” Why should clients expect less from their attorneys?
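The arithmetic Marc alludes to — combining issue-level probabilities into an overall chance of success — can be sketched in a few lines. This is a hypothetical illustration, assuming the two issues are independent and that a party must prevail on both to win overall:

```python
# Sketch: combining per-issue probabilities into an overall chance of success.
# Hypothetical assumptions: the two issues are independent, and the party
# must win BOTH to prevail -- assumptions a decision tree makes explicit.

p_issue_1 = 0.60  # chance of success on the first issue
p_issue_2 = 0.25  # chance of success on the second issue

p_overall = p_issue_1 * p_issue_2  # probability theory: multiply along the path
print(f"Overall chance of success: {p_overall:.0%}")  # 15%
```

Note that no comparable rule exists for combining a “pretty good chance” with a “definite possibility,” which is Marc’s point about the 0-to-100 scale.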
Anything else?
Definitely. The commenter suggests there is something wrong with subjective probabilities — that they are nothing more than “goofing around.” I have a very hard time characterizing the rigorous process of developing a thorough List of Reasons (as discussed in Part I of this interview), and then expressing an unambiguous opinion regarding the chance of winning versus losing an issue, as “goofing around.” Well-reasoned “subjective” probabilities are quite helpful — and are exactly what lawyers (and doctors and senior business executives and others) have always been hired for: “Based on all of your experience and the information you have available to you counselor (or doctor or senior executive), what’s your best guess of my chances?” And if the advisor has good judgment, then both theory and practice have shown that using his or her subjective probabilities to calculate probability-weighted average values (“expected values”), and then using these average values to make decisions, will lead to better results across the decision-maker’s entire portfolio of problems over time. And what’s the alternative? Refuse to give opinions? How is a client (or patient or board of directors) supposed to choose between alternatives (like settle or go to trial, have the surgery or hope for the best, invest millions or don’t invest) if they are given no sense at all of their expert’s opinion of success or failure for each of the risky alternatives?
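The probability-weighted average Marc describes is simple arithmetic over the tree’s terminal outcomes. A minimal sketch, with hypothetical probabilities and dollar values:

```python
# Sketch: a probability-weighted average ("expected value") over trial outcomes.
# The probabilities and dollar amounts below are hypothetical illustrations.
outcomes = [
    (0.15, 0),          # lose at trial: recover nothing
    (0.50, 400_000),    # win on liability, low damages
    (0.35, 1_000_000),  # win on liability, high damages
]

# Sanity check: the probabilities of the mutually exclusive outcomes sum to 1.
assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9

expected_value = sum(p * v for p, v in outcomes)
print(f"Expected value of trial: ${expected_value:,.0f}")  # $550,000
```

The single number is less important than the discipline of enumerating the outcomes and defending each probability.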
I once had an actuary tell me that, because the future is uncertain, his numbers were almost certainly wrong, but he believed they were less wrong than guessing outcomes with no analysis. Can the same be said for Decision Tree Analysis?
Definitely! Identifying the underlying uncertainties that will impact your overall results, making reasoned guesses about each of those, and using proven probability theory to combine the pieces has to be better than no analysis at all. I like to say that because every defendant’s decision whether to pay $X or go to trial (or, if plaintiff, take $X rather than go to trial) NECESSARILY involves thinking about the chances of doing better or worse at trial, it’s best to make those guesses as explicit and unambiguous as possible. This allows others to better understand your thought process and how you reached your recommendation, and it allows you to explore how sensitive your decision is to each of your underlying judgment calls. [Editor's Note: more on sensitivity analysis can be found at pages 12-17 to 12-18 in "Evaluating Legal Risks and Costs with Decision Tree Analysis," which is available on the articles page at litigationrisk.com; it also appears in the ACC's Successful Partnering Between Inside and Outside Counsel.]
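A one-way sensitivity analysis of the kind Marc mentions simply recomputes the expected value while a single input is varied. The one-issue tree, dollar figures, and probability grid below are hypothetical assumptions for illustration, not taken from the article cited above:

```python
# Sketch: one-way sensitivity analysis on a single tree input.
# Hypothetical one-issue tree: win liability -> recover damages; either way,
# trial costs are incurred. All figures are invented for illustration.
damages_if_win = 1_000_000
cost_of_trial = 150_000

def expected_value(p_win_liability):
    """Expected value of going to trial, net of trial costs."""
    return p_win_liability * damages_if_win - cost_of_trial

for p in (0.2, 0.4, 0.6, 0.8):
    print(f"P(win liability) = {p:.0%} -> expected value ${expected_value(p):,.0f}")
```

If the settle-or-try decision flips only when the probability moves far outside the range of reasonable opinion, the decision is robust to that judgment call; if it flips within the range, that input deserves more work.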
By now we have all seen decision trees, which help us visualize the various turning points in a case. Are there other ways to graphically represent the results of a decision tree?
It's relatively easy to summarize the results of a decision tree in a graph that orders the range of potential results from low to high and shows the relative likelihood of each. This is illustrated in most of the papers available at litigationrisk.com. In addition, sensitivity analysis graphs, which show how the expected value of litigating varies as the probability of success on a particular issue is varied, are another useful analytical result easily derived from the decision tree. Some are illustrated on page 12-18 of "Evaluating Legal Risks and Costs with Decision Tree Analysis" [available on the articles page at litigationrisk.com]. Finally, as clients begin to assess the probability of an event occurring, I often present them with a “probability wheel,” which is shown on page 12-10 of the same article, to help them visualize their chances of winning or losing. This has proven to produce more realistic assessments.
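The low-to-high outcome graph Marc describes is built directly from the tree’s leaves. A crude text-based sketch, with hypothetical leaf values and probabilities:

```python
# Sketch: summarizing a tree's leaves as an ordered outcome distribution --
# the data behind the low-to-high graph described above. Values hypothetical.
leaves = [
    (0.15, 0),          # (probability, recovery) at each terminal node
    (0.35, 1_000_000),
    (0.50, 400_000),
]

# Sort leaves by outcome value, then show relative likelihood as a bar.
for p, value in sorted(leaves, key=lambda leaf: leaf[1]):
    bar = "#" * round(p * 20)  # crude relative-likelihood bar
    print(f"${value:>9,}  {p:>4.0%}  {bar}")
```

Even this rough picture conveys something an expected value alone cannot: how much of the probability mass sits at the extremes.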
Who are some of the advanced decision tree users out there?
At this point I shouldn’t mention names, but major oil companies, utilities, insurers, financial institutions, and others are repeat users.
Decision Tree Analysis is often associated with defense counsel. Are plaintiffs’ lawyers using it, too?
If you mean plaintiffs’ personal injury lawyers, probably very few — though the benefits of doing so apply equally to both sides. But companies (or even government agencies) considering or involved in litigation will use the techniques to be sure they want to bring the case in light of the often steep costs of litigation, as well as to help plan pretrial and settlement strategies.
Besides lawyers and clients, who else out there is using decision trees in litigation?
We’ve already discussed their use by mediators (in Part I of this interview). Judges in some cases might also need to use decision trees. I’m not sure if you have seen it, but you might want to read Judge Posner’s opinion in the Reynolds case [Reynolds v. Beneficial National Bank, 288 F.3d 277 (7th Cir. 2002)], which I cite in my paper “The Role of Risk Analysis in Dispute and Litigation Management” [available on the articles page at litigationrisk.com]. In his opinion, Judge Posner reversed the approval of a proposed class action settlement, relying in part on the following analysis [at pages 284-285]:
[T]he judge should have made a greater effort (he made none) to quantify the net expected value of continued litigation to the class, since a settlement for less than that value would not be adequate. Determining that value would require estimating the range of possible outcomes and ascribing a probability to each point on the range . . ..
After outlining a hypothetical valuation of a litigation and calculating its net expected value, the court continued:
. . . our point is only that the judge made no effort to translate his intuitions about the strength of the plaintiff’s case, the range of possible damages, and the likely duration of the litigation if it was not settled now into numbers that would permit a responsible evaluation of the reasonableness of the settlement.
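The exercise Judge Posner describes — estimate the range of possible outcomes, ascribe a probability to each, and compare the settlement to the net expected value of continued litigation — can be sketched with hypothetical numbers:

```python
# Sketch of the court's suggested exercise, with invented figures:
# estimate a range of outcomes, ascribe a probability to each point, and
# compare the proposed settlement to the net expected value of litigating.
outcomes = [
    (0.3, 0),           # class loses: no recovery
    (0.5, 2_000_000),   # moderate recovery
    (0.2, 5_000_000),   # best case
]
future_litigation_costs = 300_000

net_expected_value = sum(p * v for p, v in outcomes) - future_litigation_costs
settlement_offer = 900_000

print(f"Net expected value of litigating: ${net_expected_value:,.0f}")
print(f"Settlement adequate? {settlement_offer >= net_expected_value}")
```

On these invented numbers the offer falls well short of the expected value of litigating, which is exactly the comparison the court said the district judge never made.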
And as my coauthors and I wrote in the above-cited article, “If clients and circuit judges now expect risk analyses, judges, mediators, shareholders, and the SEC may not be far behind. Outside counsel had better be ready.”
Is there a criticism of decision trees out there that you feel is unjustified?
Some lawyers will criticize some decision trees as being too complicated. Assuming the tree was correctly done, but is still complicated, then I like to say that the tree is never more complicated than their real problem (i.e., than the underlying dispute that’s being modeled). And I ask whether they think they can do a better job keeping track of all the pieces and combining them to form a sound opinion about case value: (1) by doing it in their head, or (2) by laying it out explicitly in a decision tree.
Many thanks to Marc B. Victor, Esq. for his time and effort in connection with this interview. And Marc, don’t think it’s the last time we’ll call . . ..
[UPDATE: For more on Decision Tree Analysis, Settlement Perspectives' series on decision trees includes:
- Decision Tree Analysis in Litigation: The Basics;
- Why Should You Try a Decision Tree in Your Next Dispute?;
- Advanced Decision Tree Analysis in Litigation: An Interview with Marc Victor, Part I;
- Advanced Decision Tree Analysis in Litigation: An Interview With Marc Victor, Part II (this post);
- Decision Trees in Mediation: A Few Examples; and
- Avoiding the Limitations of Decision Trees: A Few Tips from Mediators Who Use Them.]
Categories: Decision Trees, Negotiation
4 Perspectives:
michael webster — Friday, July 10, 2009 11:51 am
Marc writes: “I don’t think so. First, it’s not a question of probabilities being “precise” or “imprecise” — the idea is for them to be “realistic.” Second, assignment of a probability does not presuppose “a frequency of similar events to count.” A probability is simply a reflection of someone’s opinion of the likelihood of success in a particular situation. Lawyers have always given opinions like these, even in one-of-a-kind cases — they’ve simply used phrases (such as “pretty good chance”) much more often than they’ve used numbers.”
I made the original observation, which probably in retrospect needed more background.
Here are two good introductions to the foundational problem of how to interpret a probability measure.
1. Stanford University Encyclopedia article by Alan Hajek, which is hard – http://plato.stanford.edu/entries/probability-interpret/
2. The Wikipedia article, which is easier, but should make reading Hajek more accessible: http://en.wikipedia.org/wiki/Probability_interpretations
Marc seems to be using some version of Bayesian subjective probability, without acknowledging the possible limitations or even incoherence of this project.
Briefly, there are essentially three types of interpretation of probability.
1. Original notion – the probability of A is just the number of outcomes favorable to A divided by the total number of mutually exclusive and equally likely outcomes. So, the probability of a coin landing heads is just 1/2 because there are only two equally likely outcomes, and heads is one of them.
2. Frequentist notion – the probability of A is the limit of the frequency with which A is seen in a long series of trials. The probability of heads is 1/2, not because there are only two equally likely outcomes – landing on an edge is possible – but because over massive numbers of trials the frequency of heads approaches 1/2.
Clearly, both of these notions of probability are unsuitable for many if not all lawsuits.
That leaves the third interpretation, the subjective Bayesian.
3. The probability of A is, roughly, the degree of belief that you have in A, subject to some initial evidence. Critically, for this school of thought, there need be no requirement that people agree about this initial evidence.
” Bayesians would argue that this is right and proper – if the issue is such that reasonable people can put forward different, but plausible, priors and the data are such that the likelihood does not swamp the prior, then the issue is not resolved unambiguously at the present stage of knowledge and Bayesian statistics highlights this fact. They would argue that any approach that purports to produce a single, definitive answer to the question at hand in these circumstances is obscuring the truth.”
This again seems to limit the concept’s use in litigation, although much more needs to be said.
Marc is entirely within his rights to urge people to try decision analysis, and I for one would be entirely happy if more attorneys were required to train in decision analysis – but there are serious problems which cannot simply be ignored.
Thanks for these interviews, they were interesting.
michael webster — Friday, July 10, 2009 8:37 pm
I have reflected on my above comment, and earlier ones, and have come to the conclusion that I have been overly pedantic, something one would expect from a professor, which I used to be.
Let me try a more helpful tack.
By way of background, when I was at Stanford, Marc gave a lecture to Robert Mnookin’s negotiation class, and I heard him again in Vancouver in 2004, when he addressed the ABA Franchise Forum.
My overall sense is this: most attorneys are comfortable with about seven ordinal measures of risk: no chance, low chance, not a great chance, roughly equal, good chance, great chance, and a lock. Beyond that, they are very uncomfortable with providing bookmaking odds – even though clients want that.
They are not comfortable being in the role of a bookmaker – that is, going beyond the ordinal data and producing a cardinal measure, the measure needed for decision analysis.
So, although an experienced attorney might be able to say that event A is more likely than B, or rather that story A coheres better than story B, I sense that the information required to go beyond this ordinal measure to something like “story A coheres better by 18 percent than story B” is something most attorneys are not comfortable with.
And for good reason.
Does that mean Marc, and his students, are engaged in something akin to astrology?
No, but it does mean that on a practical level, the move from ordinal measures to full scale cardinal numbers, which can be multiplied and added together, involves imposing some structure on the attorney’s intuitive measure of likelihood or coherence of stories.
The attorney has to be convinced that a) his intuitive ordinal measure has been captured by the decision analysis model and b) is willing to accept the inferences that are captured by the modeling process.
Personally, I would be thrilled if attorneys en masse needed to learn decision analysis, game theory, and negotiation theory in order to do their jobs. However, after 30 years in decision theory, I have come to a more nuanced view: decision analysis, so ably propounded by Howard Raiffa, might work in a prescriptive situation if the modeler is sensitive to making sure that the original ordinal measures of risk are preserved in the model, and that the cardinal numbers, which allow the mathematics to work, can be treated as artifacts.
Well, I hope this has been less pedantic and more useful – because I really do like drawing decision trees to explain and resolve conflict!
John DeGroote — Friday, July 10, 2009 10:42 pm
Michael–
Thanks for advancing the discussion, and I’m happy to hear that you “really do like drawing decision trees to explain and resolve conflict;” I hope more of our readers will be able to say that soon.
You raise an important issue that we all need to be mindful of — it’s true that none of us automatically think of an 18% chance of success on a motion or a 47% chance of excluding a piece of evidence, so we do need to be careful as we convert our subjective impressions to numerical values. But once we have assigned these values, all involved — clients, lawyers and mediators — can learn a great deal from the result. At the end of the day I am confident that there is some degree of inaccuracy in numbers that are derived from the process of converting subjective impressions to objective values. However, like the actuary mentioned in my post above, the final calculation resulting from Decision Tree Analysis is “less wrong” than guessing outcomes with no analysis, and we can all learn quite a bit as the tree is developed.
Michael, thanks again for highlighting this issue — I appreciate your insights.
JD
michael webster — Sunday, July 12, 2009 10:21 am
Marc writes: ” I can’t help but think that if the person who wrote the earlier comment heard his or her doctor say “I’m ‘reasonably confident’ you’ll come through this procedure without any complications,” they would immediately ask for clarification: “What do you mean by ‘reasonably confident’? 95%? 80%? 65%?” Why should clients expect less from their attorneys?”
John writes: ” At the end of the day I am confident that there is some degree of inaccuracy in numbers that are derived from the process of converting subjective impressions to objective values.”
I fear I am not making myself clear here. Just by way of background, my PhD was in rational choice theory, and my external examiner was A.K. Sen. So, I do have an excellent academic background for this topic.
Marc and John, in order to get numbers like 70% from “I am fairly confident”, certain axioms on risk measures have to be satisfied. Nobody who does this type of decision analysis ever bothers finding out whether the particular problem is coherent, they simply start asking for numbers.
It doesn’t matter whether the numbers come from a gut intuition, well reasoned from a list of pros and cons, or divine inspiration if the underlying ordinal measures cannot give rise to a cardinal measure. It is just astrology if coherence isn’t tested for at the beginning.
We know that it is very difficult to get coherent cardinal rankings from ordinal preferences, and so it is going to be correspondingly difficult to get coherent cardinal risk measures from ordinal preferences. Not difficult because it is hard to measure, but because you may not be measuring anything. (This is what the ordinary attorney understands at a gut level, in my view.)
And if I asked my Doctor for further clarification, I hope that he would have the common sense to reply:
“Do I look like a book maker to you?”