Which Error Would You Like, Sir?

Devesh Rajadhyax
23 min read · May 18, 2021


How Decision Error Analysis helps to optimise Automated Decision Making

“The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.”

― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values


Introduction

Digital Transformation is the mantra of the day. Companies all over the world are engaged in digitising and automating their processes. Not all companies are at the same level of achievement, however. The ones that have raced ahead are looking at Automated Decision Making (ADM) as the next step. Apart from improving profitability, efficiency and customer happiness, ADM is expected to raise inclusivity and fairness in the enterprise.

The growing interest in decision automation has resulted in a renewed focus on the field of Decision Science. Google appointed a new Chief Decision Officer Cassie Kozyrkov in 2017 to figure out how machines and humans will take decisions together. Kozyrkov has written a number of influential articles on this subject. Researchers like Lorien Pratt are evangelising the budding field of Decision Intelligence that combines Decision Science with many other fields like psychology, economics and artificial intelligence [1].

Automation without giving proper thought to its consequences would be premature and risky. Various consequences, such as the socio-economic and ethical ones, need to be studied. In this article, I want to address one important aspect of decision automation: we will closely investigate the consequences of decision errors.

Decision making under uncertainty is prone to errors. Decision Science has treated such errors in depth. While studying Decision Science to learn about decision errors, I realised that it turns frequently to a particular field: a field that has a long and rich experience of dealing with errors and their consequences. This is the ancient field of criminal justice. I found the considerations of decision errors in criminal justice really interesting, so I decided to include them in this article as a case study.

The criminal justice system of a society is responsible for solving crimes and bringing the perpetrators to justice. As you can imagine, errors in judicial decisions have large implications for society. Great judicial minds have thought about this for centuries, and the result is a number of legal principles. We will occupy ourselves with two of them in particular. In many countries, especially those that follow the British legal tradition, the judicial system is guided by these two principles — the Presumption of Innocence and the Blackstone Ratio.

Presumption of Innocence — everyone is innocent until proven guilty.

Blackstone Ratio — It is better that ten guilty persons escape than that one innocent suffers.

Both principles have been a part of legal thought for at least two thousand years. The Blackstone Ratio is named after the eminent English jurist William Blackstone, who mentioned it in 1760 in his book ‘Commentaries on the Laws of England’, but the idea has ancient roots going back to the Greek philosophers [2].

These principles have had a major impact on the actual working of the criminal justice system. Over the years, many scholars, jurists and practitioners of law have challenged these principles, especially Blackstone’s Ratio [3]. They have argued that society pays a high cost by adhering to this principle [4].

What can we learn from the thoughts of an eighteenth century jurist that will be helpful in designing systems that automate decisions?

Declaring someone as guilty (or not guilty) of a crime is a decision. When an innocent person is punished, or when a guilty one is acquitted, that is an error in the decision. An error has a cost, and the thinkers in the field have had to carefully consider the consequences of such errors. This is what we can learn from them — how to think about the consequences of decision errors.

In this article we will learn about decision error analysis and how it can be used in automating business decisions. As examples, apart from the criminal justice system we just spoke about, we will analyse another common and probably even older decision process — the clinical diagnosis. We will draw parallels of these examples with some common business decisions and see how the learnings can be applied. Decision error analysis can also be applied to personal decisions such as whom to marry, but that will be left as an exercise to the reader.

The YN Decision

Decision making has many rigorous definitions. We will just choose a very simple one and go ahead:

‘Decision making is choosing between alternatives’.

There are many different types of decisions, but we are specifically interested in one particular type — the decision whose outcome is either Yes or No. We will call this the Yes/No Decision (YND).

Let’s take two examples from two different ages that we will refer to throughout this article:

(Pre-historic man): Is there a tiger behind this bush? Y/N
(Modern risk analyst): Is this credit card transaction a fraud? Y/N

It is surprising how many decisions can be expressed in terms of YND. For example, the clinical diagnosis decision can be broken up as follows:

(Doctor): What is this patient suffering from, based on history, symptoms and investigations?

This is not a YND. The outcome will be the name of a disease. But we can express it as a set of YNDs using an algorithm:

1. Identify possible diseases based on available data
2. For each possible disease A:
   a. Is the patient suffering from A? Y/N
   b. If the outcome of a. is Y, do not process further. The patient has disease A.

‘Is the patient suffering from A’ is a YND, repeated over and over to get the answer to the ‘what’ decision. This is one example of how a decision of a different type can be converted to a YND.
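As a minimal sketch, assuming a hypothetical candidate list and a hypothetical YND predicate (neither of which comes from this article), the reduction might look like this in Python:

```python
from typing import Callable, Iterable, Optional

def diagnose(candidate_diseases: Iterable[str],
             is_suffering_from: Callable[[str], bool]) -> Optional[str]:
    """Reduce the 'what is the patient suffering from?' decision
    to a sequence of Yes/No decisions (YNDs)."""
    for disease in candidate_diseases:
        if is_suffering_from(disease):   # the repeated YND: 'Is the patient suffering from A?'
            return disease               # the first Yes stops the loop
    return None                          # no Yes for any candidate: no diagnosis reached

# Purely illustrative usage with a stand-in YND:
positives = {"flu"}
print(diagnose(["malaria", "flu", "dengue"], lambda d: d in positives))  # -> flu
```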

Note: I am not a clinician, just as I am not a jurist. I do not claim that doctors actually use the above algorithm, though I think it is quite probable. We are using the example only for illustration.

Decision Errors in YND

As I said, all decisions are prone to errors. Different types of decisions give rise to different kinds of errors. For example, only two types of errors can occur in YND:

False Positive (FP): The actual answer was No, but you decided Yes
False Negative (FN): The actual answer was Yes, but you decided No

It is easy to extend the YND errors to What/Which/Whom kind of decisions. For illustration let’s analyse the clinical diagnosis decision. The method is based on a YND ‘Is Patient Suffering From A’, which is subject to false positives and false negatives.

False positive in ‘Is Patient Suffering From A’: For a disease A, the patient doesn’t have it, but the doctor decides Yes. Thus the patient is diagnosed with A when they don’t have it. But there are two further possibilities:

  1. The patient is not suffering from any disease, but A is detected
  2. The patient is suffering from B, but A is detected.

False negative in ‘Is Patient Suffering From A’: For a disease A, the patient has it, but the doctor decides No. So the patient is declared as normal.

Taken together then, there are three possible errors in clinical diagnosis:

  1. The patient is normal, but is diagnosed with A
  2. The patient has B, but is diagnosed with A
  3. The patient has A, but is declared normal

Error Detection and Ground Truth

In the previous section, I referred to the ‘actual answer’. We can tag a decision as erroneous only if we know this actual answer. This is called the ‘ground truth’ in decision analysis. So for example,

Decision outcome: Yes
Ground truth: No
Error: False positive

Ground truth need not exist or be known. In fact, in many cases the ground truth is non-existent, known only at a later time, or unknowable. In our clinical diagnosis example, the actual disease will be known only after many rounds of medication and investigation, once the patient finds relief. In some cases, the actual ailment remains a mystery. Consider also the loan approval decision. If the loan is approved and the borrower defaults, it is a false positive error. But if the loan is not approved, it will never be known whether it was the right decision or a false negative. Such is the nature of a No decision.

Error Preference

That brings us to a somewhat weird question:

What type of error do you prefer in your decision — FP or FN?

If you are a normal business person, your answer is clear — No Thanks! Imagine your data scientists coming and asking you whether you prefer false positives or false negatives. You will emphatically make it known that no error in the decision is acceptable.

Unfortunately, it’s not up to you. Decisions under uncertainty are riddled with inaccuracies. To follow the diagnosis example, there are many uncertainties: the data that the doctor gets is vague, the correlation between symptoms and diseases is not strong, and investigations do not return clear answers. The doctor has to make the decision despite all the uncertainties. Most real-life decisions are of this nature, and so they are prone to errors.

In many cases, however, it is possible to choose one error over another. In a YND, it is possible to reduce FP if more FN is acceptable, and vice versa. This relative acceptability of one type of error over another will be called Error Preference (EP) here. Error Preference depends on what kind of decision you are making. Before we investigate Error Preference further, let’s look at an interesting and innate EP that evolution has built into all living creatures [5].

In day-to-day life, we often have to decide between the presence and absence of something. You will readily recognise this as a YND. Recall the Tiger decision facing the prehistoric human. Another example: you are trying to decide whether it will rain, so that you can decide whether to take an umbrella with you. It is obvious that in these decisions an FP is preferable to an FN. Let’s see the costs of each in the Tiger decision:

FN cost: Life
FP cost: Little embarrassment

Evolution favours life over embarrassment and thus we all have an ingrained EP for false positives.

In some decisions, it is better to err on the side of caution. In the loan approval decision, you would prefer FN over FP. Check this for yourself.

A very good discussion on Error Preference is given in this article.

Cost of Error

In the last section I referred to the cost of errors in the Tiger decision. It is easy to see that the Error Preference is linked to the cost of errors. In YND, false positives and false negatives can have different costs. We will now see another example, the Credit Card Fraud decision.

Decision: Is this credit card transaction a fraud?
FN: It is a fraud, but is tagged as normal
Cost of FN: The value of the transaction + all the trouble
FP: It is not a fraud, but is tagged as one. A representative calls the customer. The customer says all is well.
Cost of FP: A phone call

Optimisation of a Decision

Putting together learnings from previous sections:

  • In any decision, there will be errors. Specifically, in a YND there are FP and FN errors.
  • There is a certain cost associated with each kind of error.
  • The proportion of FP and FN is adjustable.

Taken together, the above points mean that we can reach a point of lowest cost for a decision.

Consider another measure of optimisation, the accuracy.

If we take the decision N times, we will get some FP and FN and the others will be correct decisions.

Let:

N — number of decisions
NFP — number of false positives
NFN — number of false negatives
NCD — number of correct decisions

It is obvious that

N = NCD + NFP + NFN

Thus:

Accuracy = NCD / N, or
Accuracy = (N - (NFP + NFN)) / N

We can adjust NFP and NFN such that the accuracy is the highest, which is another way to optimise our decision.

We would like to choose NFP and NFN in such a way that the cost and accuracy are both reasonably optimised.
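As a tiny sketch, the two measures can be written directly from the formulas above; the per-error costs are treated as inputs, and the numbers in the usage lines are made up for illustration:

```python
def accuracy(n: int, n_fp: int, n_fn: int) -> float:
    """Accuracy = NCD / N = (N - (NFP + NFN)) / N."""
    return (n - (n_fp + n_fn)) / n

def total_cost(n_fp: int, n_fn: int, cost_fp: float, cost_fn: float) -> float:
    """Total cost of the errors made over one batch of decisions."""
    return n_fp * cost_fp + n_fn * cost_fn

# Illustrative numbers: 100 decisions, 3 FPs, 2 FNs, FP costs 10, FN costs 20
print(accuracy(100, 3, 2))       # 0.95
print(total_cost(3, 2, 10, 20))  # 70
```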

As you must have noticed, the entire optimisation process depends on our ability to adjust the decision process to produce a certain number of false positives and false negatives. A good way to imagine this ability is to assume a variable ‘p’ involved in the decision process such that,

NFP = f1(p)
NFN = f2(p)

Which simply means that by varying p, we can control the number of errors. We will call such a variable a Decision Control Variable (DC variable, or simply DC).

Of course, I haven’t claimed that a decision process must have such a DC variable, or that, even if it exists, we will get to control it. But most real-life decisions have clear control variables that affect the number of FP and FN.

We will illustrate this with an example.

Decision Method and Control Variable — Example

To illustrate, we will choose a decision that you are acutely familiar with through all the years of appearing for exams.

Decision — Is the student knowledgeable? Y/N
Method — Score and Threshold

The method, which we will call ‘Score and Threshold’, is a simple and well-known one. You ask a number of questions and allocate a score for each answer. Then you add all the scores to get a total. A threshold is applied to this total. If the total score is S and the threshold (called the Passing Percentage in the academic world) is T, then

If S >= T, outcome = Y
If S < T, outcome = N

Remember that the decision is to check whether the student is knowledgeable. Thus the ground truth is the actual knowledge of the student, and the errors:

False positive: outcome is Y, but student is not knowledgeable
False negative: outcome is N, but student is knowledgeable

We can express it as a table

Fig 1: Table showing Ground Truth and Decision Outcome

Note: As I said earlier, ground truth in this case is hard to determine. One can assess the ground truth only after some interactions, or while the student works in a related job. In our simplified example we have considered it to be known.
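To make the mechanics concrete, here is a minimal sketch of the Score and Threshold rule together with error tagging against a known ground truth; the score and threshold in the usage line are made-up numbers:

```python
def decide(score: float, threshold: float) -> str:
    """Score and Threshold: outcome Y if the total score reaches the threshold."""
    return "Y" if score >= threshold else "N"

def error_type(outcome: str, knowledgeable: bool) -> str:
    """Compare the decision outcome with the ground truth."""
    if outcome == "Y" and not knowledgeable:
        return "FP"      # declared knowledgeable, but is not
    if outcome == "N" and knowledgeable:
        return "FN"      # declared not knowledgeable, but is
    return "correct"

# A knowledgeable student scoring 38 against a threshold of 40 is a false negative
print(error_type(decide(38, 40), knowledgeable=True))  # -> FN
```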

We now proceed to show that a DC exists in this case.

The obvious candidate for a Decision Control variable is the threshold T. Intuitively, if we raise T, fewer non-knowledgeable students will be declared knowledgeable, which means fewer FPs. However, some knowledgeable students will not be able to clear the higher threshold, so there will be more FNs. The reverse will happen if we lower T.

We will now give a formal setting to this intuition. We are going to use a number of variables, so I am using hard numbers for all ground truth values to make the calculations a little easier on the eye. It is easy to replace them with variables.

N — total students = 100
KY — number of knowledgeable students (actual) = 75
KN — number of non-knowledgeable students (actual) = 25
S — score of a student
Smin — minimum score = 0
Smax — maximum score = 100
TG1 — the score below which there is no knowledgeable student = 35
TG2 — the score above which there is no non-knowledgeable student = 65

Assumptions:
1. 90% of knowledgeable students are above TG2
2. 90% of non-knowledgeable students are below TG1
3. The distribution of scores in all bands is uniform

With the above values and assumptions, we vary the threshold T in the range 15–85 and get the following table:

(I have not shown the calculations here, but they are pretty straightforward.)

Fig 2: Variation in FP and FN with change in T

We can already make some observations:

1. As expected, as T increases, FP decreases and FN increases

2. There is no T for which both FP and FN are zero.

Our candidate T has emerged as a Decision Control variable for this decision. You can change it to control the number of FP and FN. Now the task is to optimise it. To do that, we need to further assume some costs for FP and FN:

  • Let the cost of FP be 10
  • The cost of FN is assumed to be double that of FP. After all, failing a deserving candidate is more harmful. So the cost of FN = 20.

We add total cost and accuracy to the above table:

Fig 4: Variation in Total Cost and Accuracy with T
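If you want to reproduce this kind of table, here is a minimal Python sketch under the assumptions listed above. The exact handling of the band boundaries is my own interpretation, so the numbers it prints may differ slightly from the figures:

```python
def frac_at_or_above(lo: float, hi: float, t: float) -> float:
    """Fraction of a uniform score band [lo, hi] that is at or above threshold t."""
    if t <= lo:
        return 1.0
    if t >= hi:
        return 0.0
    return (hi - t) / (hi - lo)

KY, KN = 75.0, 25.0                # knowledgeable / non-knowledgeable students
TG1, TG2, SMAX = 35.0, 65.0, 100.0
COST_FP, COST_FN = 10.0, 20.0      # per-error costs assumed in the text

# 90% of knowledgeable students score above TG2, the rest between TG1 and TG2;
# 90% of non-knowledgeable students score below TG1, the rest between TG1 and TG2.
knowledgeable_bands = [(0.9 * KY, TG2, SMAX), (0.1 * KY, TG1, TG2)]
non_knowledgeable_bands = [(0.9 * KN, 0.0, TG1), (0.1 * KN, TG1, TG2)]

for T in range(15, 90, 5):
    # FP: non-knowledgeable students who clear the threshold
    fp = sum(n * frac_at_or_above(lo, hi, T) for n, lo, hi in non_knowledgeable_bands)
    # FN: knowledgeable students who fail to clear the threshold
    fn = sum(n * (1 - frac_at_or_above(lo, hi, T)) for n, lo, hi in knowledgeable_bands)
    cost = fp * COST_FP + fn * COST_FN
    acc = (KY + KN - fp - fn) / (KY + KN)
    print(f"T={T:2d}  FP={fp:5.2f}  FN={fn:5.2f}  cost={cost:6.1f}  accuracy={acc:6.2%}")
```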

A few more observations:

  • In terms of cost, 40 is the best threshold. It also has the second-highest accuracy, 96.50%.
  • Highest accuracy is achieved at T = 65 (97.50%), but the cost is slightly higher.
  • Higher thresholds have very high cost and atrocious accuracy

So either 40 or 65 can make a good threshold in our entirely imaginary example. However, the passing marks for our engineering exams used to be 40% (we didn’t have CGPA then), so I think there is some element of truth in these assumptions.

(Another interesting threshold is T = 35, where there are no FNs, which means that no deserving student is denied. Coincidentally, this is the passing threshold of many 10+2 level board examinations. As you can see, you pay almost 33% more cost to keep this threshold.)

This was a simple example with a simple decision method, but the essence should be kept in mind. In any decision method there will be one or more DCs. The costs and accuracy can be examined by varying the DC. Which value of the DC we use depends on our objective: do we want to minimise the cost or maximise the accuracy? Guiding principles such as the Blackstone Ratio help to set the objective.

The CJ system and Blackstone Ratio

Armed with the above decision error analysis method, we are now ready to apply it to the criminal justice system and the Blackstone Ratio. As always, let’s first convert the CJ process into a YND.

1. For the crime committed (Cr)
- Find suspects {S}. Here {} means that it is a set.
2. For each suspect Si, decide:
- Is Alleged Perpetrator(Si, Cr)? Y/N
- If yes, call him/her the Accused (Acc)
- Stop and proceed to trial
3. Conduct the criminal trial process (CTP)
- Is Perpetrator(Acc, Cr)? Y/N
- If Yes, Acc is convicted
- If No, Acc is acquitted
4. Sentence the convict, if applicable

This is admittedly a very simplified version of an extremely complex process, but I believe that it captures the core elements. In this process there are two YNDs:

1) Is Alleged Perpetrator (Si, Cr) where Si is a suspect and Cr is the crime — this decision is taken by the police department. They are responsible for identifying the suspect and charging one (or more, but we ignore that here) with the crime. Note that this step is a conversion of a Which/Whom decision to a YND.

2) Is Perpetrator (Acc, Cr) where Acc is the accused — this decision is made by the judge or jury.

Note that in the first decision, none of the YNDs may return Y (one reason can be a lack of evidence), in which case the police have to identify more suspects.

While Blackstone Ratio and Presumption of Innocence affect both the YNDs, we will discuss only the second one here — the Criminal Trial Process (CTP) to decide Is Perpetrator (Acc, Cr).

Slotting Decisions

The CTP decision shares some interesting characteristics with the clinical diagnosis decision. They are both what can be called ‘slotting decisions’.

1. There is a slot that has to be filled. If the patient is sick, there must be a disease and if there is a crime, there must be a perpetrator. The ground truth exists in both cases, but may not be knowable.

2. It is possible that there is no slot. The patient may be faking, or on closer examination there may turn out to be no crime (an accident instead of a murder). In this case the decision disappears.

3. False positive means wrong slotting and false negative means an empty slot. So in case of CTP, FP means an innocent getting convicted and FN means guilty getting acquitted. In case of diagnosis, FP means patient getting wrong treatment and FN means a sick patient getting no treatment. This has a bearing on costs that we will discuss later.

4. In slotting decisions, an FP includes an FN. If an innocent is being punished, then the guilty party is still at large. If the wrong disease is being treated, the actual disease is left untreated.

The Choice Between FP and FN

Blackstone Ratio is nothing but an Error Preference. In criminal justice, to punish an innocent is a false positive error, whereas to acquit a guilty is a false negative [6]. So Blackstone Ratio prefers FN over FP with a ratio of 10:1, or

FN/FP = 10

At the risk of repeating an earlier section, why should there be a choice to make? Why can’t we have all guilty punished AND no innocent convicted?

As we saw in our simple example, there is a DC that controls FP, FN, cost and accuracy. We can change these values, but can’t get both FP and FN to zero.

(Strictly speaking, this is not mathematically proved. It is only a rule of thumb that applies to most real-world problems because of the distribution of the underlying random variables.)

What is the DC in CTP? Once we understand that, we will also understand Blackstone Ratio and its importance in criminal justice process.

Decision Control in CTP

The CTP is of course not a simple ‘Score and Threshold’ method. One of the limitations of Score and Threshold is that if scores for all the questions are not available, it is not possible to make the decision. It also does not allow for what is called a ‘decision network’, where one decision becomes an input to another decision.

The method that fits best with the CTP is the Bayesian method of inference. In a short and simplistic way, Bayesian inference can be explained as follows:

Let’s say Y and N are two possible outcomes of your decision. In Bayesian Inference (BI) they are called ‘hypotheses’. BI assigns them some probabilities P(Y) and P(N) such that

P(Y) + P(N) = 1

P(Y) and P(N) can have any values to begin with, say

P(Y) = 0.5, P(N) = 0.5

Which means they are both equally probable. The values of P indicate your belief in Y or N, which is right now at 50% for both. These starting beliefs are called priors.

Then we conduct certain experiments relevant to the decision and observe their outcome. The outcome is also called ‘evidence’. Let’s say E1 and E2 are two such outcomes. Both E1 and E2 can have values True or False.

Each outcome affects our beliefs in some way.

  • If E1 = True, our belief in Y (P(Y)) may go up. Accordingly, P(N) will go down.
  • If E2 = True, P(Y) may go down and P(N) may go up

(The factors that decide the impact of E1 on P are learned from past data, therefore BI is a machine learning method).

Thus after the first experiment, the value of E1 is noted and P(Y) is adjusted. BI gives the formula by which this can be done. The value of P(Y) after the adjustment is called the posterior belief. This is used as the prior for the next experiment.

As we conduct more and more experiments, our beliefs in Y and N keep on updating. When we are done with the experiments, the final values are noted.

Say

P(Y) = 0.95 and P(N) = 0.05

This means that our belief in Y is 95%, enough to declare it as the outcome of our decision. If we find P(Y) = P(N) = 0.5, then we have failed to come to any decision.
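A minimal sketch of this sequential updating is given below. The likelihoods (how probable each experiment's True outcome is under Y and under N) are made-up illustrative numbers; in a real system they would be learned from past data, as noted above.

```python
def update(prior_y: float, evidence_true: bool,
           p_true_given_y: float, p_true_given_n: float) -> float:
    """One Bayesian update of the belief P(Y) after observing one experiment."""
    prior_n = 1.0 - prior_y
    if evidence_true:
        like_y, like_n = p_true_given_y, p_true_given_n
    else:
        like_y, like_n = 1.0 - p_true_given_y, 1.0 - p_true_given_n
    return like_y * prior_y / (like_y * prior_y + like_n * prior_n)

p_y = 0.5  # prior: Y and N equally probable
# Each tuple: (observed outcome, P(True | Y), P(True | N)) -- illustrative values only
experiments = [(True, 0.8, 0.3), (True, 0.7, 0.4), (False, 0.6, 0.9)]
for observed, p_t_y, p_t_n in experiments:
    p_y = update(p_y, observed, p_t_y, p_t_n)
    print(f"P(Y) = {p_y:.3f}, P(N) = {1 - p_y:.3f}")
```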

Bayesian Network for CTP

How can we apply Bayesian Inference to the trial proceedings? Let’s write a new algorithm for the ‘Is Perpetrator’ YND.

Remember the Presumption of Innocence. In the beginning, you have to assume that the accused is not guilty. The burden of proof is with the prosecution, which means that they have to produce evidence that improves the belief in the accused being guilty. We can use this understanding of the process in our algorithm:

1. Start with: Is perpetrator(Acc, Cr) = N
2. Introduce a new decision: Is Evidence Against Accused? Y/N
3. P(Y) = P(N) = 0.5
4. For each witness:
a. Ask questions to the witness
b. Score the answers. If the answer is against the accused, the score is higher
c. At the end of examination, total the score
d. Use a threshold. If the score is above this threshold, the outcome of this experiment is True, otherwise False
e. Update the values of P(Y) and P(N)

Every witness examination is an experiment for our BI. But how do we decide whether the experiment returns True or False? We use the Score and Threshold method for that. Pay careful attention to the threshold. What is it and how is it decided?

The threshold signifies a virtual line that the presented evidence has to cross. Only if the evidence is strong enough will the experiment (witness) outcome be in favour of the accused being guilty [7].

This is the best place to see how the guiding principle affects decisions. If we set this threshold high, false positives will be reduced, as we have seen in our previous example. A principle such as the Blackstone Ratio does exactly this: it guides the judge or jury to use a high threshold.

But our algorithm for CTP is not yet complete. So continuing:

5. At the end, consider P(Y) and P(N). Only if the value of P(Y) is higher than a threshold, declare ‘Is Evidence Against Accused’ = Y
6. If ‘Is Evidence Against Accused’ = Y, change ‘Is Perpetrator(Acc, Cr)’ to Y

One more threshold is introduced here. BI has given us a certain belief in the hypothesis Y. That belief will be a value between 0 and 1. At what value do we decide that the outcome of the ‘Is Evidence Against Accused’ decision is Y? In other words, how much belief is enough to proclaim the accused guilty? The same discussion about the Blackstone Ratio applies: we set the threshold higher, so as to suppress false positives.

Thus the CTP uses two Decision Control variables:

TW — The threshold used in the Score and Threshold of each witness examination

TJ — The threshold used for converting belief to outcome in final judgement
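Putting the pieces together, here is a hedged sketch of the trial algorithm with the two Decision Control variables. The witness scores and the likelihood numbers are hypothetical stand-ins, not anything prescribed by this article or by actual legal procedure.

```python
def witness_supports_guilt(answer_scores: list, tw: float) -> bool:
    """Score and Threshold for one witness: True if the testimony total crosses TW."""
    return sum(answer_scores) >= tw

def bayes_update(p_guilty: float, evidence_true: bool,
                 p_true_if_guilty: float = 0.8, p_true_if_innocent: float = 0.3) -> float:
    """Update the belief in guilt after one witness (illustrative likelihoods)."""
    p_innocent = 1.0 - p_guilty
    ly = p_true_if_guilty if evidence_true else 1.0 - p_true_if_guilty
    ln = p_true_if_innocent if evidence_true else 1.0 - p_true_if_innocent
    return ly * p_guilty / (ly * p_guilty + ln * p_innocent)

def criminal_trial(witnesses: list, tw: float, tj: float) -> tuple:
    """'Is Perpetrator(Acc, Cr)?' starting from the presumption of innocence."""
    is_perpetrator = "N"   # presumption of innocence
    p_guilty = 0.5         # prior for 'Is Evidence Against Accused?'
    for answer_scores in witnesses:
        evidence = witness_supports_guilt(answer_scores, tw)  # DC variable TW
        p_guilty = bayes_update(p_guilty, evidence)
    if p_guilty >= tj:     # DC variable TJ
        is_perpetrator = "Y"
    return is_perpetrator, p_guilty

# Raising TW or TJ reduces false positives (innocents convicted) at the price of
# more false negatives (guilty acquitted) -- the preference behind Blackstone's Ratio.
verdict, belief = criminal_trial([[3, 4, 5], [2, 2, 1], [5, 5, 4]], tw=10, tj=0.9)
print(verdict, round(belief, 3))
```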

As we change these variables, the number of FP and FN will change. This will have implications for the costs that society pays. We will now turn our discussion to the costs of errors in criminal justice.

Cost Analysis in Criminal Justice Process

As I said before, the Blackstone Ratio has been discussed in detail by experts in legal and other fields. The analysis of the costs of false positives and false negatives can be found in texts such as [8] and [9]. As our main objective is to learn about decision error analysis, I will keep the discussion at a basic level. Also, we will use the framework of ‘slotting decisions’ so that it is possible to generalise the learnings.

  • In slotting decisions, it is necessary to establish first that there is a slot. In this case, is there a crime to solve? For this reason, the laws of many countries do not admit a murder charge till the body is found.
  • Once the slot is confirmed, there has to be a match. We have already seen the process through which this match is found.

(Note: In reality, there can be more than one perpetrator. We restrict it to one for the sake of simplicity. An advanced model will have to allow multiple perpetrators.)

  • Cost of FN: We acquitted the actual perpetrator. However, the slot is empty. So the law and order machinery remains in action to fill the slot.
  • There are two types of crimes — crime of passion and crime of profit. In case of crime of passion, the crime is rarely repeated. But in case of crime of profit, the perpetrator of the original crime will most probably commit another crime. For the current discussion, we will ignore this distinction.
  • We assign a probability to whether the perpetrator will be caught before they commit another crime. Let’s say this is 50%.
  • If CC is the cost of one crime, the cost of a false negative becomes 0.5 CC: the cost of the second crime multiplied by the probability that it happens.
  • Cost of FP: In this case an innocent has been punished. Society has thus already committed a crime against the innocent person. Again for simplicity, assume that the cost of this crime is the same as that of the actual crime — CC [10].
  • When an innocent is punished, the slot is filled. The law and order machinery is therefore not looking for the perpetrator, who remains at large. Depending on the nature of the crime (passion or profit), they will commit another crime before being apprehended for that new crime.
  • Thus the cost of a false positive is:
1 CC — the crime against the innocent person + 1 CC — for the new crime committed by the guilty at large
  • So the cost of an innocent punished = 2 CC = 4 x the cost of a guilty acquitted (a short parameterised sketch of this arithmetic follows below)
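The arithmetic above can be written as a small parameterised sketch; the 50% re-capture probability and the equal-cost assumption are the simplifications stated in the bullets, not empirical figures.

```python
CC = 1.0              # cost of one crime, in arbitrary units
P_CAUGHT_FIRST = 0.5  # chance the acquitted perpetrator is caught before re-offending

# FN: the guilty is acquitted, but the slot stays open and the search continues,
# so a second crime happens only with probability (1 - P_CAUGHT_FIRST)
cost_fn = (1 - P_CAUGHT_FIRST) * CC       # = 0.5 CC

# FP: an innocent is punished (a 'crime' of cost CC against them) and the slot is
# wrongly filled, so the real perpetrator re-offends unhindered
cost_fp = CC + CC                         # = 2 CC

print(cost_fp / cost_fn)                  # -> 4.0, i.e. FP is four times as costly as FN
```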

Knowing the cost of errors will enable us to optimise the decision. Even from this simple analysis we can see that FP is costlier than FN, so the Error Preference of jurists like Mr. Blackstone is justified.

Summarising the Criminal Justice Process and Learnings

For the case study of criminal justice process, we went through the following steps:

1. We expressed the decision as an algorithm that uses YND.

2. We used some method for the decision, like Score and Threshold or Bayesian Inference.

3. Based on the method, we could identify our Decision Control variables.

4. We then conducted an analysis of the costs of FP and FN

5. We understood why the DCs are given certain values based on the principles such as the Presumption of Innocence and Blackstone’s Ratio.

We have also seen that the CJ process is part of a general class of decisions called ‘slotting decisions’. Many problems fall in this class. Decisions such as hiring a candidate are also variations of the slotting decision.

Applying Decision Error Analysis

Error analysis is a fundamental part of your ADM initiatives. We saw the major steps in decision error analysis in our case study above. You should be able to apply them in the decision automation projects you are managing.

You will find that many decisions will be YNDs, so that error analysis can be applied easily. Approval is the classic type of decision used in so many problem statements. You can recognise the Approval YND in decisions such as Loan Approval, Insurance Claims Approval, Student Admission, Startup Funding and so on. The pattern of error analysis will be similar in all these cases, but the details, such as costs, EP and DCs will vary substantially.

ADM is expected to improve inclusivity and fairness, so a number of use cases will be of the ‘Selection’ type. Selection decisions are everywhere, such as candidate selection for hiring or vendor selection for purchase. They are a variation of slotting decisions, where there is a slot, but the match may not be unique. The initial process is similar to criminal justice, with a shortlisting of candidates, but the second decision is taken for all shortlisted candidates. There can be a third ‘who is the best’ decision at the end. The costs of error and the EP will vary depending on the importance of the slot. The Selection decision throws up many interesting possibilities for error analysis, and it will take another article to treat it properly.

Then there are diagnostic decisions, where you are trying to find out what is wrong with something. The field of IT security involves many such decisions — finding viruses, malware and the like — which are similar to our clinical diagnosis example. Another closely related decision type is ‘Anomaly Detection’. These decisions resemble the credit card fraud detection example that we visited.

Many more use cases from marketing, finance, manufacturing, logistics, training etc. can be identified for decision error analysis [11], but I hope I have given enough examples to motivate you to apply it in your own work.

Summary

Decision Intelligence and Automated Decision Making are fast becoming the next big trend in Digital Transformation initiatives. It is critical to analyse the consequences of errors in a decision before its automation is put into practice. Decision error analysis deals with the consequences of decision errors. For a given problem statement, you should have a guiding principle that sets the Error Preference for the decision. Based on the EP, it is possible to optimise the costs and set the Decision Control variables to their optimal values.

References

[1] Lorien Pratt, Decision Intelligence has “Left the Lab”: Lessons Learned from 10 Years of Evangelism (2019). Retrieved from https://www.lorienpratt.com/decision-intelligence-has-left-the-lab-lessons-learned-from-10-years-of-evangelism/

[2] Tarun Jain, ‘Let Hundred Guilty Be Acquitted But One Innocent Should Not Be Convicted’: Tracing the Origin and the Implications of the Maxim (2008), Presumptions: Doctrines & Applications, ICFAI University Publications

[3] F. Alhoff, Wrongful Convictions, Wrongful Acquittals, and Blackstone’s Ratio (2018), AJLP

[4] L. Laudan, Costs of Error: Or, Is Proof Beyond a Reasonable Doubt Doing More Harm than Good? (2011), in Oxford Studies in Philosophy of Law 195 (Leslie Green & Brian Leiter eds., 2011)

[5] Max M. Houck, Tigers, black swans, and unicorns: The need for feedback and oversight (2019). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7219184/

[6] F. M. Dekking et al., A Modern Introduction to Probability and Statistics: Understanding Why and How (2005), p. 378, Springer

[7] E. Lillquist, Recasting Reasonable Doubt: Decision Theory and the Virtues of Variability (2002), 36 U.C. Davis L. Rev. 85, 149

[8] D. Epps, The Consequences of Errors in Criminal Justice (2015), Harvard Law Review, Vol. 128, No. 4 (February 2015), pp. 1065–1151

[9] S. Bushway, Estimating Empirical Blackstone Ratios in Two Settings: Murder Cases and Hiring (2011), Albany Law Review

[10] R. Allen & L. Laudan, Deadly Dilemmas (2008), U of Texas Law, Public Law Research Paper No. 141

[11] T.H. Davenport and J.G. Harris, Automated Decision Making in Consumer Lending (2004), research note, Accenture Institute for High Performance Business, New York, June 2004


Devesh Rajadhyax

Author of 'Decoding GPT', AI startup founder, self-taught in machine learning