
Techniques for Evaluating Methods and Quality of Evidence in Relation to a Research Question

Extended Investigation
StudyPulse

01 May 2026


The quality of any research finding depends entirely on the methods used to produce it and on the quality of the evidence collected. This key knowledge area focuses on how to evaluate methods and evidence in relation to a specific research question — not in the abstract, but in the context of what the investigation is trying to answer.

Why the Research Question Is Central

Methods and evidence cannot be evaluated in isolation. A survey with a 50-person sample might be strong evidence for an exploratory qualitative question and completely inadequate evidence for a claim about population-wide trends. Always evaluate methods and evidence by asking: “Does this method, and this evidence, adequately address this specific research question?”

KEY TAKEAWAY: The evaluative standard is always the research question. A methodology is “appropriate” or “strong” only relative to what the investigation is trying to find out. Build this relational analysis into every methodological evaluation you write.

Evaluating Methods in Relation to the Research Question

Alignment: Does the Method Address the Question?

| Question type | Appropriate methods | Inappropriate methods |
| --- | --- | --- |
| “How prevalent is X?” | Large-scale survey | Single interview |
| “Why does X happen?” | Interviews, focus groups, case studies | Survey with closed questions only |
| “Does X cause Y?” | Controlled experiment | Correlational survey alone |
| “What is the lived experience of X?” | In-depth interviews, ethnography | Quantitative survey |
| “How has X changed over time?” | Longitudinal study, document analysis | Single cross-sectional survey |

Internal Validity

Does the method actually measure what it claims to measure in this context?
- Are the instruments validated for this population?
- Are key concepts operationalised consistently with the question’s intent?
- Are confounding variables controlled or accounted for?

External Validity

Can the findings be generalised to the target population the research question implies?
- Is the sample representative of that population?
- Were the conditions of data collection realistic?

Reliability

Would the same method produce consistent results if repeated?
- Are procedures standardised?
- Are instruments reliable (internal consistency, test-retest reliability)?
- Is inter-rater reliability established for qualitative coding?
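Inter-rater reliability is commonly quantified with Cohen’s kappa, which measures agreement between two coders corrected for the agreement expected by chance. A minimal sketch, using only the standard library (the coder labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders on the same items, corrected for chance.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Proportion of items where the two coders assigned the same code
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to the same five transcript segments
a = ["theme1", "theme2", "theme1", "theme3", "theme1"]
b = ["theme1", "theme2", "theme2", "theme3", "theme1"]
print(round(cohens_kappa(a, b), 2))
```

A kappa of 1 is perfect agreement and 0 is chance-level agreement; values above roughly 0.6–0.7 are conventionally treated as acceptable for qualitative coding, though the threshold should be justified for the specific investigation.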

EXAM TIP: A three-part answer to “evaluate the method in relation to the research question” should cover: (1) whether the method type is appropriate, (2) a specific strength in how it was implemented, and (3) a specific limitation that affects the conclusions drawn.

Evaluating Evidence Quality

After evaluating method design, evaluate the quality of the evidence produced:

For Quantitative Evidence

  • Sample size: Was it adequate to detect the expected effect? Refer to a power analysis if one is available.
  • Effect size: Is the observed effect practically meaningful, not just statistically significant?
  • Confidence intervals: How precise are the estimates?
  • Replication: Has this finding been independently reproduced?
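The distinction between effect size and statistical significance can be made concrete with Cohen’s d (the mean difference standardised by the pooled standard deviation) and a confidence interval for the mean difference. A minimal sketch using only the standard library — the scores and group sizes here are invented for illustration:

```python
import math
import statistics

# Hypothetical scores for two groups of 8 participants each
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]
treatment = [4.9, 5.1, 4.6, 5.3, 4.8, 5.0, 4.7, 5.2]

def cohens_d(x, y):
    """Standardised mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(y) - statistics.mean(x)) / math.sqrt(pooled_var)

def mean_diff_ci(x, y, t_crit=2.145):
    """95% CI for the mean difference; t_crit is the two-sided critical
    t value for df = 14 (appropriate for these two groups of 8)."""
    diff = statistics.mean(y) - statistics.mean(x)
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    return diff - t_crit * se, diff + t_crit * se

d = cohens_d(control, treatment)
lo, hi = mean_diff_ci(control, treatment)
print(f"d = {d:.2f}, 95% CI for difference: ({lo:.2f}, {hi:.2f})")
```

A narrow interval that excludes zero indicates a precise, non-trivial estimate; a wide interval signals that the evidence is too imprecise to support a strong conclusion, even if the result is statistically significant.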

For Qualitative Evidence

  • Richness: Is the data detailed and nuanced enough to support the interpretations?
  • Saturation: Was enough data collected that new themes stopped emerging?
  • Negative case analysis: Were examples that didn’t fit the emerging pattern actively sought?
  • Reflexivity: Did the researcher acknowledge and manage their own influence on the data?

Assessing Evidence Against the Conclusion

Even high-quality evidence may not support a particular conclusion if:
- The evidence is from a different population than the conclusion claims
- The timeframe of the evidence does not match the timeframe of the claim
- The operational definition used differs from the concept in the conclusion
- The effect observed is too small to be practically significant for the claim made

This is underdetermination — the evidence underdetermines the conclusion, meaning the conclusion goes beyond what the evidence can establish.

Evaluating Your Own Methods and Evidence

In your written report’s evaluation section, apply all of the above to your own investigation:
- Were your methods appropriate for your specific research question?
- What are the key limitations of your evidence?
- What would stronger evidence look like?
- What alternative explanations remain after considering your evidence?

Assessors look for honest, specific self-evaluation — not vague disclaimers (“the sample was small”) but substantive analysis of how limitations affect the reliability, validity and generalisability of your conclusions.

APPLICATION: Write a methods evaluation section for your own investigation before you write the results section. This forces you to identify limitations while they can still be addressed, and it produces the reflection required for a high-scoring written report.

COMMON MISTAKE: Confusing methodological limitations with failures. Having a small sample does not mean your research is worthless — it means your conclusions must be appropriately qualified. The error is not having a small sample; the error is claiming population-level conclusions from a small-sample study.
