This video provides an in-depth analysis of the research of the three 2021 Nobel Prize in Economics laureates (Professors David Card, Joshua Angrist, and Guido Imbens) and explains the innovations they brought to the field of causal inference. In particular, it emphasizes the importance of natural experiments and instrumental variables for establishing reliable causal relationships through research designs that mimic randomized controlled trials (RCTs) even in situations where real-world experimentation is impossible. It also highlights their contribution to the credibility revolution in empirical research through the integration of these approaches within the potential outcomes framework.


1. Introduction to the 2021 Nobel Economics Laureates and the Significance of Their Contributions

In October 2021, Professor David Card of UC Berkeley, Professor Joshua Angrist of MIT, and Professor Guido Imbens of Stanford University received the Nobel Prize in Economics for their contributions to causal inference. Their award was considered an exceptional case recognizing the contributions of empirical research based on data analysis, given the Nobel committee's traditional emphasis on theoretical contributions. Noting that just two years earlier, in 2019, experimental research on poverty alleviation had also won the Nobel in Economics, the lecturer emphasizes that the importance of research solving real-world problems through data analysis, not just theoretical work, is growing steadily.

While the media reported on the award at the time, the lecturer expresses regret that the laureates' specific contributions were not explored in depth, and aims to thoroughly examine their research in line with the channel's theme of causal inference. The lecture content is primarily based on the 46-page Nobel committee report explaining the reasons for the selection, with the content explained more accessibly through research examples.

The reason for their selection can be summarized in one phrase: "Contributions to the design-based approach for inferring causal relationships." Specifically, they were recognized for systematically establishing research designs that mimic randomized controlled trials (RCTs) in situations where real-world experimentation is impossible, thereby contributing to causal inference research.


2. The Importance of Natural Experiments and the Credibility Revolution

2.1. The Concept and Application of Natural Experiments

The first key concept is the natural experiment. The most effective method for inferring causal relationships is the randomized experiment, but in reality such experiments are very often impossible. Natural experiments provide an alternative approach: by utilizing naturally occurring events or situations that create conditions similar to a randomized experiment without actually conducting one, researchers can analyze and infer causal relationships. This is also called a quasi-experiment.

While these three professors did not originate the natural experiment concept, understanding the background of how natural experiments gained prominence is necessary for properly appreciating their contributions.

2.2. Background and Necessity of the Credibility Revolution

This leads to the second key concept: the Credibility Revolution. The term comes from a paper written by Nobel laureate Professor Joshua Angrist (co-authored with Jorn-Steffen Pischke). That paper can be read as a response to "Let's Take the Con Out of Econometrics," published in 1983 in the American Economic Review, the most prestigious economics journal, by Professor Edward Leamer of UCLA. Leamer proposed eliminating the weaknesses -- the "con" -- of econometric research, and many economists agreed with his criticism. In particular, he criticized the practice of fitting multiple statistical models to the data and selecting the best-looking one -- so-called "model hacking" -- because the resulting estimates were highly sensitive to these choices, making it difficult to analyze causal relationships reliably.

In response, Professor Angrist argued, roughly 30 years later, that such criticism was no longer valid, and that a Credibility Revolution had been achieved in causal inference empirical research through research design.

2.3. Examples Illustrating the Difficulty of Causal Inference

To help understand how difficult causal inference in data analysis is and why overcoming it deserves to be called a "revolution," several examples are presented.

2.3.1. Corporate Diversification and Firm Value

The first example concerns the effect of business diversification on firm value. From the 1980s, numerous studies showed through data analysis that diversification and firm value had a negative relationship -- that is, diversification might lower firm value. However, subsequent research pointed out that these results were merely correlations arising from failure to properly account for endogenous factors and selection bias affecting diversification. When causal inference was performed accounting for selection bias, the opposite result emerged -- diversification actually increased firm value. This example shows how simplistic data analysis without deep consideration of causal relationships can lead to diametrically opposite conclusions for decision-making.

2.3.2. The Effect of Religion on Economic Growth

The second example involves empirical analysis of the effect of religion on economic growth. Professor Robert Barro of Harvard initially showed through data analysis that religious participation had a positive effect on economic growth. However, Professor Cristobal Young, then a doctoral student at Princeton, argued that Barro's analysis results were highly sensitive to even small changes in the statistical model. In particular, he showed that the instrumental variable usage was flawed and that results varied greatly depending on the analysis specification. This is a representative example of how sensitive data analysis results can be to data characteristics and statistical model specifications.

Even with the same data, researchers can consider various analytical models and include only some in their papers. In regression analysis in particular, results can change depending on the choice of control variables. A paper with the somewhat playful title "I Just Ran Two Million Regressions" (by Xavier Sala-i-Martin) demonstrates in extreme terms how sensitive results can be to that choice. This study tested the robustness of factors affecting economic growth, showing that 37 of 62 candidate explanatory variables identified in prior research had effects that were highly sensitive to control variable selection. Such results can be attributed to multicollinearity, omitted variable bias, reverse causality bias, and other factors. Although nearly half the variables showed robustness, these examples clearly demonstrate that control variable composition and statistical model specification can greatly affect regression analysis results.

2.4. Overcoming the Credibility Crisis and Design-Based Causal Inference

These examples made researchers aware of a reproducibility and credibility crisis in causal inference empirical research and data analysis. The question arose: "Can we really trust the results of data analysis, and do those results truly represent causal relationships?"

Against this backdrop, the importance of the Credibility Revolution was highlighted. The method economists chose to overcome this crisis was an approach based on the potential outcomes framework -- also known as the Rubin Causal Model -- that uses appropriate research design rather than statistical model specifications for causal inference.

This design-based causal inference lent great credibility to the causal interpretation of data analysis results and significantly elevated the status and role of empirical research and causal inference data analysis. This is the background of the Credibility Revolution, the most important generational change in the field of empirical research. The potential outcomes framework was covered in a separate session, and interested viewers are referred to that video.

2.5. The Rise of RCTs and Quasi-Experiments/Natural Experiments

The most effective research design for causal inference is the randomized controlled trial (RCT). While RCTs are not easily applied in social sciences, their use has been rapidly expanding due to the advantages of causal inference. The 2019 Nobel Economics laureates were also recognized for poverty alleviation research using RCTs, conducting randomized field experiments in India, Africa, and elsewhere to verify the causal effectiveness of various policies in public health, education, and financial services, greatly contributing to the design of micro-level poverty alleviation policies.

While randomized experiments for causal inference are being applied across various fields, there remain far more situations where experiments are impossible or unethical. In such cases, the credibility revolution introduced quasi-experiments or natural experiments -- approaches that are not experiments per se but resemble them. Graphs in economics journals show that the use of quasi-experimental/natural experimental methods (DID, regression discontinuity, instrumental variables, matching, etc.) has been rapidly increasing recently.

The credibility revolution is not limited to economics alone but is one of the biggest changes occurring across all social science fields. No one working in empirical research and data analysis can deny this revolution. The use of causal inference research is rapidly increasing in the most prestigious journals across fields, and the lecturer is confident that the growth has been even steeper in the past decade.


3. The Causal Inference Research Contributions of the Nobel Laureates

3.1. Applying Causal Inference to Real-World Problems

With this background, the contributions of the 2021 Nobel Economics laureates become clearer. It is difficult to say that they introduced the concept of causal inference or developed new methodologies per se. The Rubin Causal Model, natural experiments, instrumental variables, and other core frameworks had been used for decades. However, their contribution lies in having systematically applied these causal inference frameworks and methodologies -- which had been studied in econometrics and statistics -- to solving real-world problems, thereby establishing research designs and data analysis methodologies for causal inference. In summary, the greatest contribution of these three professors is "expanding the horizon of applying causal inference analysis to real-world problems." Following their research provides an overview of the major strands of causal inference methodologies currently used across economics and social science.

3.2. Professor David Card's Research: Dramatic Use of Natural Experiments

One of Professor David Card's most representative studies, likely familiar from media coverage, concerns minimum wage research. Minimum wage increases have been a constant source of debate in many countries. General economic theory held that minimum wage increases lead to employment decreases. Professor Card's famous paper was the first to challenge this conventional wisdom.

This study analyzed the effect of minimum wage increases on employment using the situation where New Jersey's minimum wage was raised from $4.25 to $5.05 in 1992. From a potential outcomes perspective, the causal effect could be estimated by comparing the employment rate in New Jersey after the minimum wage increase with the counterfactual employment rate that would have existed had New Jersey not raised its minimum wage. However, this counterfactual is never observable in reality, which we call the fundamental problem of causal inference.

The basic idea of a natural experiment is to find a comparable alternative that can substitute for the unobservable counterfactual. That is, to find a comparison group similar to New Jersey in every respect except for the minimum wage increase.

The authors' idea was to compare fast-food restaurants in Pennsylvania, which borders New Jersey. Specifically, they compared restaurants near the state border: they were geographically adjacent, so the neighborhoods and residents' characteristics were similar, as were geographic factors. Yet due to the legal boundary, fast-food restaurants in New Jersey experienced a minimum wage increase while those in Pennsylvania did not -- despite being adjacent.

Although no actual experiment was conducted, it was as if an experiment had been run: New Jersey became the treatment group affected by the minimum wage increase, and Pennsylvania became the control group unaffected by it. This type of research design is what we call a natural experiment.

The analysis showed that despite the minimum wage increase in New Jersey, there was virtually no change in employment compared to adjacent Pennsylvania, leading to the conclusion that minimum wage increases do not necessarily lead to employment decreases. This study is one of the most-cited pieces of research demonstrating the dramatic use of natural experiments for important real-world problems and supporting the idea that real-world data can differ from economic theory.
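The comparison logic of this study can be sketched as a difference-in-differences calculation. The following is a minimal illustration on simulated data (not the actual Card-Krueger survey data): both "states" share a common employment trend, the true effect of the wage increase is set to zero, and subtracting the control state's before-after change from the treatment state's change recovers that effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated employment per restaurant (NOT the actual Card-Krueger data).
# Both states share a common trend; the true treatment effect is set to 0.
n = 200
common_trend = -1.0  # shared shock between the two survey waves
nj_before = rng.normal(20, 4, n)                             # New Jersey, before
nj_after = nj_before + common_trend + rng.normal(0, 1, n)    # true effect = 0
pa_before = rng.normal(22, 4, n)                             # Pennsylvania, before
pa_after = pa_before + common_trend + rng.normal(0, 1, n)

# Difference-in-differences: (NJ after - before) - (PA after - before).
# The common trend cancels out, isolating the treatment effect.
did = (nj_after.mean() - nj_before.mean()) - (pa_after.mean() - pa_before.mean())
print(f"DID estimate of the minimum-wage effect: {did:.2f}")
```

The key design assumption, visible in the simulation, is the common trend: absent the wage increase, New Jersey and Pennsylvania restaurants would have moved in parallel, which is exactly why geographically adjacent, similar restaurants were chosen.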

Another famous natural experiment by Professor Card concerns immigration research. Immigration and refugee issues are a major global topic. In 1980, approximately 120,000 Cubans fleeing Fidel Castro's socialism escaped to the United States in what is known as the Mariel Boatlift. This event caused an unexpected massive influx of immigrants into Miami, creating a natural experiment situation.

To analyze whether this immigrant influx took away jobs from native workers, from a potential outcomes perspective, one would compare Miami's employment rate after the Mariel Boatlift with the counterfactual employment rate that would have existed had the event not occurred. But history does not allow for "what if" assumptions, so this counterfactual is not observable.

The core of the natural experiment is to find comparable alternatives to substitute for the counterfactual and conduct comparative analysis. This study used Atlanta, Houston, Los Angeles, and Tampa -- cities unaffected by the Mariel Boatlift but socioeconomically very similar to Miami -- as comparison groups. The result was that immigrant influx did not lower native employment rates.

Without such a natural experiment, one could only compare the number of immigrants and employment rates across different cities. However, there are clearly other factors explaining why immigrants congregate in particular cities. Comparing cities with many immigrants to those with few would reveal many differences beyond immigrant numbers, making it difficult to distinguish whether employment differences were truly the causal effect of immigrants or selection bias from other factors.

But the Mariel Boatlift event affected precisely Miami at that point in time and had no effect on other cities. At least other cities were not affected in the short term. This created experiment-like conditions, enabling the construction of a natural experiment with Miami as the treatment group and similar cities unaffected by the event as the control group, allowing reasonable causal inference.

The lecturer emphasizes that there are no complex formulas or big data in these explanations of causal inference: The most important thing for causal inference is effectively constructing a research design that utilizes appropriate real-world situations to distinguish between groups affected and unaffected by the event.

Professor David Card and other early pioneers of natural experiments demonstrated through a series of studies that natural experiments are powerful tools for reasonably inferring causal relationships in real-world problems, leading many researchers to recognize the power of natural experiments and triggering the widespread adoption of these methodologies across economics and all social sciences. In other words, these can be evaluated as research that pulled the trigger on the Credibility Revolution. Simultaneously, as natural experiment research grew, the considerations and assumptions necessary for using natural experiments in causal inference were systematically organized and established.


4. Reinterpretation of Instrumental Variables and Integration with the Potential Outcomes Framework

4.1. The Concept and Role of Instrumental Variables

While we have discussed natural experiments, the instrumental variable (IV) cannot be overlooked as another major tool for causal inference. Simply put, an instrumental variable is a variable that is correlated with the causal (treatment) variable but affects the outcome only through that variable, being independent of all other factors that influence the outcome. While the detailed statistical concepts are not covered here, in brief, the condition that there is no correlation with other factors affecting the outcome variable is called exogeneity. Its opposite is endogeneity. We can only infer causal relationships for exogenous causal variables.

Generally, unless the causal variable is randomly assigned through an experiment, it will inevitably be correlated with other factors, containing both endogenous and exogenous components. Therefore, without an experimental approach, causal relationships cannot be properly estimated due to the endogeneity problem.

In this context, what is the role of the instrumental variable? It is to use the instrumental variable to estimate only the exogenous portion of the causal variable -- the part unrelated to the error term -- and then use only this exogenous portion to infer causal relationships. This is the two-stage least squares (2SLS) method using instrumental variables. The principle itself is straightforward.
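The two-stage logic can be made concrete with a small simulation. This is an illustrative sketch, not any of the laureates' actual analyses: an unobserved confounder `u` makes the treatment `x` endogenous, so naive OLS is biased, while 2SLS using a valid instrument `z` recovers the true effect (set to 2.0 here).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative data: z is a binary instrument, u an unobserved confounder.
z = rng.integers(0, 2, n).astype(float)
u = rng.normal(0, 1, n)
x = 0.5 * z + u + rng.normal(0, 1, n)  # treatment: driven by z AND by u
y = 2.0 * x + u + rng.normal(0, 1, n)  # outcome: u also enters directly

# Naive OLS of y on x is biased upward because x and the error share u.
X = np.column_stack([np.ones(n), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 2SLS stage 1: regress x on z, keep the fitted (exogenous) part of x.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# 2SLS stage 2: regress y on the fitted values only.
Xh = np.column_stack([np.ones(n), x_hat])
tsls = np.linalg.lstsq(Xh, y, rcond=None)[0][1]

print(f"OLS (biased): {ols:.2f}, 2SLS: {tsls:.2f}")
```

The simulation shows exactly the principle described above: the first stage strips out the endogenous component of the causal variable, and only the instrument-driven variation is used in the second stage.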

4.2. The Relationship Between Instrumental Variables, Causal Inference, and Natural Experiments

You may understand that instrumental variables utilize the exogenous portion, but have you ever wondered what connection this has to the natural experiments mentioned earlier, and whether instrumental variables are truly appropriate tools for inferring causal relationships? The lecturer mentions this was their biggest question when studying instrumental variables and causal inference. This is because in the potential outcomes framework, randomized experiments, and natural experiments, causal relationships are clearly defined and it is clear what research design is needed to infer them. But with instrumental variables, causal relationships were not so clearly defined.

In fact, instrumental variables have a history spanning decades -- nearly 100 years -- and have been perceived as an econometric or statistical tool for unbiased estimation. If you learned about regression analysis in a statistics class, you may have heard the term BLUE (Best Linear Unbiased Estimator), which means that if exogeneity is satisfied, statistically unbiased estimation is possible. Ultimately, the instrumental variable is a statistical tool for such unbiased estimation.

This much is understandable, but what does unbiased estimation have to do with causal relationships, and how does this relate to natural experiments? The lecturer states that the greatest contribution of Professors Angrist and Imbens lies precisely here.

4.3. The Contribution of Professors Angrist and Imbens: Integrating Instrumental Variables into the Potential Outcomes Framework

As previously explained, the most representative approaches for causal inference are randomized experiments, quasi-experiments, or natural experiments based on the potential outcomes framework. The core concept of the potential outcomes framework is the Average Treatment Effect (ATE). Simply put, since potential outcomes or counterfactuals are unobservable, individual treatment effects cannot be known. Instead, under appropriate assumptions, causal relationships can be inferred by comparing the means of treatment and control groups -- this is ATE.
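The ATE logic just described can be sketched in a few lines. The following simulation (hypothetical numbers, chosen only for illustration) constructs both potential outcomes for every individual, then shows that although only one outcome per person is ever observed, random assignment lets the simple difference in group means recover the true average effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical potential outcomes: y0 if untreated, y1 if treated.
# Individual effects vary; the true ATE is their population mean (~3).
y0 = rng.normal(10, 2, n)
y1 = y0 + rng.normal(3, 1, n)
true_ate = (y1 - y0).mean()

# Fundamental problem: we observe only ONE potential outcome per person.
# Under random assignment, the difference in group means recovers the ATE.
treated = rng.integers(0, 2, n).astype(bool)
y_obs = np.where(treated, y1, y0)
ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()

print(f"true ATE: {true_ate:.2f}, estimated: {ate_hat:.2f}")
```

Randomization is doing all the work here: it makes the treated and control groups comparable on average, so the unobservable counterfactual mean can be replaced by the other group's observed mean.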

Professors Angrist and Imbens showed that instrumental variables can also be understood within this potential outcomes framework, ultimately playing a decisive role in integrating instrumental variables into the potential outcomes framework, through two key papers.

Understanding instrumental variables from a potential outcomes perspective has three major advantages:

4.3.1. Clarifying Conditions for Causal Effect Interpretation

The first is that it greatly contributed to clarifying the conditions under which instrumental variables can be interpreted as causal effects. From a statistical perspective, the assumptions of instrumental variables were mostly related to the unobservable error term and were thus somewhat abstract. Professors Angrist and Imbens proved that from a potential outcomes perspective, five conditions must be met for instrumental variables to be interpreted as causal effects. While not all of these conditions are statistically testable, they at least provided a clear framework with substantive content for evaluating the appropriateness of instrumental variables through logical means, playing a decisive role in more transparent and reliable use of instrumental variables.

4.3.2. Systematizing Instrumental Variables as Natural Experiments

The second advantage is that, as explained earlier with natural experiments like the New Jersey minimum wage increase or the Miami Mariel Boatlift, treatment and control groups are separated by some exogenous event or condition. This is called treatment assignment. In randomized experiments, treatment is assigned randomly. By systematizing the fact that instrumental variables are also one such treatment assignment mechanism, they clarified that instrumental variables are indeed a form of natural experiment. That is, they proved that instrumental variables are a means of causal inference that estimates the effect of treatment induced by the instrumental variable.

4.3.3. Introduction of the Local Average Treatment Effect (LATE)

Particularly important here is the third significance: they were the first to reveal that the estimate from instrumental variables is not the Average Treatment Effect (ATE) for the entire population, but rather the Local Average Treatment Effect (LATE) -- the effect on those whose treatment was induced by the instrumental variable.

When treatment is induced by an instrumental variable, consider the potential outcomes of treatment status itself. Assuming for simplicity that the instrumental variable takes only the values 0 or 1, each person has a potential treatment status when the IV is 0 and another when the IV is 1. Based on these potential treatment statuses, research subjects can be broadly classified into four types:

  • Always Takers: People who always receive treatment regardless of the IV value.
  • Never Takers: People who never receive treatment regardless of the IV value.
  • Compliers: People who receive treatment when the IV is 1 and do not when it is 0.
  • Defiers: People who do not receive treatment when the IV is 1 and do when it is 0.

These concepts could not emerge without interpreting through potential outcomes, so previously, such types were not significantly considered in instrumental variable usage. Since the role of the instrumental variable is to estimate the treatment effect induced by the IV, Always Takers and Never Takers -- who are not induced by the IV -- are not included in the IV estimation at all. Ultimately, what we estimate through instrumental variables concerns Compliers and Defiers.

To interpret this as a causal effect from a potential outcomes perspective, we need to be able to compare outcomes when treated versus when not treated. Since this is not an experiment, treatment cannot be directly assigned. However, if the instrumental variable can indirectly induce whether treatment is given or not, we can sufficiently perform causal inference through the instrumental variable, indirectly assigning treatment and control groups.

Professors Angrist and Imbens proved that Defiers -- who move opposite to the instrumental variable -- can cancel out or interfere with the causal effect in Compliers. Consequently, one of the important assumptions of instrumental variables is that Defiers must not exist -- the monotonicity assumption. This monotonicity assumption was one that people had not considered before the introduction of potential outcomes.

In conclusion, under the assumption that there are no Defiers, instrumental variables can estimate the causal effect of treatment induced by the instrumental variable among Compliers, which the authors defined as the Local Average Treatment Effect (LATE). In summary, the greatest contribution of Professors Angrist and Imbens is integrating the widely used concept of instrumental variables into the potential outcomes framework, clarifying that instrumental variables are a treatment assignment mechanism that induces treatment, and proving that they are a tool for inferring causal effects among Compliers. Through this, we can now understand instrumental variables as another form of quasi-experiment or natural experiment that induces treatment.
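This LATE logic can be verified in a small simulation. The population below is hypothetical: it mixes always takers, never takers, and compliers (no defiers, so monotonicity holds), and deliberately gives compliers a different treatment effect (4) from everyone else (1). The Wald estimator -- the IV effect on the outcome divided by the IV effect on treatment take-up -- recovers the compliers' effect, not a population-wide average.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical population of three types (no defiers -> monotonicity holds).
types = rng.choice(["always", "never", "complier"], size=n, p=[0.2, 0.3, 0.5])
z = rng.integers(0, 2, n)  # binary instrument, randomly assigned

# Treatment take-up by type: compliers follow z, the other types ignore it.
d = np.where(types == "always", 1, np.where(types == "never", 0, z))

# Potential outcomes: compliers have a treatment effect of 4, others of 1.
y0 = rng.normal(0, 1, n)
effect = np.where(types == "complier", 4.0, 1.0)
y = y0 + effect * d

# Wald estimator: (IV effect on y) / (IV effect on d) -> LATE for compliers.
late_hat = (y[z == 1].mean() - y[z == 0].mean()) / (
    d[z == 1].mean() - d[z == 0].mean())
print(f"LATE estimate: {late_hat:.2f}")
```

Always takers and never takers appear identically in both instrument arms, so they cancel out of the numerator and denominator; only the compliers' behavior moves with the instrument, which is why the estimate lands on their effect of 4 rather than any blend with the others' effect of 1.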


5. Professor Joshua Angrist's Instrumental Variable Research Examples

5.1. The Effect of Compulsory Education Regulations on Income

One of Professor Angrist's representative studies used compulsory schooling laws requiring school attendance until a certain age as an instrumental variable, exploiting the fact that differences in quarter of birth lead to differences in years of schooling. In the United States, which uses age-based enrollment (unlike some other countries), the school year begins in August or September, so students born earlier in the calendar year are older than their classmates when the new school year starts. Therefore, within a given grade, students born in the 1st or 2nd quarter reach the compulsory schooling age first and become legally eligible to drop out, while students born in the 3rd or 4th quarter cannot yet drop out because they have not reached the age threshold.

As a result, as shown in graphs, average years of education differ by birth quarter. Students born in the 3rd or 4th quarter tend to have slightly more years of education on average. This famous study analyzed how these group differences translate into income differences using instrumental variables.

This study can be summarized as using birth quarter as an instrumental variable to estimate the causal effect of compulsory education on income only for Compliers -- those whose years of schooling could be affected by the compulsory education regulation.

An important point here is that what instrumental variables estimate is LATE, so interpretation of the causal relationship requires care. What can be estimated with this instrument is neither the effect of birth quarter on income nor the causal effect of years of education on income in general. Years of education is the causal variable, but recalling the role of the instrumental variable, what is estimated is the effect of the differences in schooling induced by the instrument (birth quarter) through the compulsory education regulation. Ultimately, the instrumental variable estimates the causal effect of the compulsory education system on income -- not simply the effect of a group variable.

Moreover, this is not estimated for everyone. More precisely, it can estimate causal effects only for Compliers -- those whose years of schooling could be changed by the compulsory education regulation. This does not mean LATE is meaningless. Considering the purpose of the study -- evaluating the effectiveness of compulsory education policy -- it is appropriate to analyze and infer causal relationships only for Compliers, those whose education years could be changed by the regulation. The biggest implication of the Angrist-Imbens papers is that instrumental variable estimates should not be over-interpreted but should be properly interpreted considering LATE.

5.2. Solving Noncompliance Problems in Randomized Experiments: Vietnam War Service Research

Another study by Professor Angrist demonstrates an important use case for instrumental variables: they can also serve as tools for solving the two-sided noncompliance problem in randomized experiments. Even when treatment and control groups are randomly assigned through experiments, people cannot always be forced to comply. Treatment-group individuals may fail to receive treatment, and control-group individuals may obtain treatment through other channels. This situation is called two-sided noncompliance, and by using the random treatment assignment itself as an instrumental variable, it was shown that causal relationships can still be effectively estimated for those who comply with their assignment -- the Compliers.

A representative example is the study on the causal effect of Vietnam War service on income. During the Vietnam War, the U.S. used a lottery based on birth dates to determine draft priority. This was purely randomly assigned. While most people followed the results, there were certainly exceptions: Always Takers who would enlist regardless, Never Takers who would refuse regardless, and Defier types who would resist the draft if prioritized but enlist if not. However, since Defiers are unlikely to be numerous in practice, the monotonicity assumption is reasonably satisfied.

Using instrumental variables, the causal effect of Vietnam War service on income could be estimated at least for Compliers -- those who complied with the lottery-based draft. This is LATE. Although it is LATE, if the primary purpose is to analyze the impact of service and develop appropriate compensation policies, analyzing the causal relationship among Compliers is sufficiently meaningful.

5.3. Other Causal Inference Research Examples (Recommended Videos)

Another example is research on the effect of social institutions on economic growth, which is an excellent example of natural experiments and instrumental variable usage. The lecturer notes it was covered as the first topic in the 'Causal Inference Research Gems' series and recommends watching the relevant videos to see how natural experiments and instrumental variables are creatively utilized.

Due to time constraints, instrumental variables could not be covered in detail, but since the purpose is to re-examine the Nobel laureates' contributions to causal inference, those interested in more detail are directed to separate sessions on instrumental variables.


6. Another Contributor to the Nobel Prize: Professor Alan Krueger

There is one person who absolutely cannot be omitted from these studies: the late Professor Alan Krueger of Princeton University. As shown in the representative papers discussed, Professor Krueger conducted numerous studies with Professors Angrist and Card, making him arguably another contributor to this Nobel Prize. Tragically, he passed away several years ago. Due to the Nobel Prize regulation that only living individuals can receive the award, he was absent from the laureate list, but there is no doubt that had he been alive, he would have shared this Nobel Prize in Economics. The lecturer pays brief tribute.


7. Limitations and Challenges of Design-Based Causal Inference

No research can be perfect, and there are indeed various criticisms of design-based causal inference. Rather than ending with only praise for the Nobel contributions, examining limitations provides a more balanced perspective.

7.1. Criticism of External Validity

The most representative criticism of design-based causal inference concerns external validity. That is, there is skepticism about whether results from a research design in a specific situation can be applied to other situations. For example, can results from fast-food restaurants in New Jersey and Pennsylvania in the early 1990s be directly applied to other countries? One cannot help but be somewhat skeptical.

This is why, in design-based causal inference research, meta-analysis -- the synthesis of causal inference results -- is particularly important. Meta-analysis was covered in a summer session, though unfortunately the video was not made public.

7.2. Criticism of Research Topic Bias

The second criticism is that research tends to be concentrated on topics where randomized experiments, quasi-experiments, or natural experimental designs are feasible. The lecturer personally considers the most representative criticism of this tendency to be the criticism of the book 'Freakonomics,' one of the world's best-selling and most widely known popular economics books. In it, the authors present fascinating examples revealed through various causal inference and natural experiment studies.

While many praise the book for popularizing economics, data analysis, and causal inference research, some critics argue that recent economists have been "focusing on very small but clever and cute work rather than concentrating on truly important problems."

The lecturer personally believes that the recent trend does show some concentration on research where randomized or natural experiments are feasible. However, this is more a criticism of researchers' attitude rather than of causal inference itself, and is a challenge all researchers must address.


8. Conclusion: The Present and Future of Causal Inference Research

Nevertheless, many researchers today are using causal inference research designs to solve important real-world problems. For those curious about how causal inference research can be applied to real-world problem solving, the lecturer recommends Harvard University's introductory economics course -- known as the most popular economics course at Harvard, primarily taken by first-year undergraduates and not overly difficult. This course presents many textbook-quality examples of research design for causal inference, which the lecturer personally considers their favorite course.

Additional recommended readings for those wishing to study the topics covered further include:

  • 'Mostly Harmless Econometrics' and 'Mastering 'Metrics' by Professors Joshua Angrist and Jorn-Steffen Pischke. The two books cover largely the same ground and can be considered textbooks embodying the philosophy and spirit of the credibility revolution; the former offers the more detailed technical treatment, while the latter is the more accessible introduction.
  • 'Causal Inference for Statistics, Social, and Biomedical Sciences' by Professor Guido Imbens and Professor Donald Rubin (who originated the Rubin Causal Model). This book is also highly recommended for studying causal inference.

The annual causal inference summer session doctoral course is also aligned with all the content discussed today. The lecturer believes it may be virtually the only Korean-language lecture material on research design for causal inference at the center of the credibility revolution. They hope the summer session materials and videos will be helpful to those interested in further study.
