If you’re planning to attend graduate school or are in the midst of a program, you might be able to get away without taking any Statistics course if it doesn’t really apply to your field. This is especially true if you’re pursuing something like a Master of Fine Arts (MFA).
But other disciplines, from public health and international studies to social work and the life sciences, involve scientific experimentation and case-study work to some degree, and that means statistical analysis. One problem inherent in some programs is that while a Statistics course isn’t mandatory, statistics still show up in one way or another in publications and journal articles. Students in the middle of writing a term paper might panic when they can’t understand an analysis they need to grasp in order to write a comprehensive and coherent paper.
The point here is that you don’t need to be a Statistics expert to decode the statistical data in a paper, but you do need at least a working knowledge of statistical terms to navigate term papers and ultimately have a well-rounded understanding of your discipline.
Thus, having a statistical background does wonders for students who want to move into research-oriented roles or even PhD work. But getting to that point requires some learning and a few strategic insights for managing statistical information.
Working Backwards if Necessary
During a busy semester, students have to read many case studies and peer-reviewed articles to get a grasp of the subject matter and understand how research works within a particular field. Honestly, it’s too much to read every assigned article in full, especially while juggling multiple classes and their deadlines, and trying to do so is a recipe for anxiety.
One strategy for getting a grasp on the subject matter is to work backward, using a deductive approach. Start by reading the abstract of a research paper, then the introduction, and then the conclusion, to get a sense of what the paper is trying to accomplish. Once you can summarize the purpose of the research, go back and read the methods section.
The reason for going about it this way is to build a foundation of what the paper is about, and the conclusions it draws, before you get into the hard statistical section. The methods section is arguably the most difficult part of a research or peer-reviewed paper, largely because many variables, acronyms, and elements of the process appear all at once. To piece it together, you need to be an investigator, tracing how each variable relates to the conclusions, outcomes, and possible impact. Once you can make those associations between inputs, variables, and outcomes, deciphering the methods section becomes manageable.
Making Use of Key Terms for a Smoother Analysis
Beyond that, there are some key terms and concepts that are essential for interpreting and giving feedback on studies published in academic journals. You might not be able to understand every single piece of statistical data in a study (usually only the peer reviewers can do that). Still, a huge part of being a researcher is the ability to pull out the vital information and give feedback on where an experiment went wrong, or whether questionable methods were used.
As such, here are some basic terms to understand before embarking on a critique of statistical methods:
Populations and sample sizes - Understanding the population and the sample drawn from it is foundational. Knowing who is not part of the sample, or what the control group is, also helps students build a better mental map of how a study’s sample differs from other groups in society, and why it was chosen for a particular purpose.
Length of an experiment - How long was the experiment conducted, and why? How many months or years? What problems, if any, does that length introduce? How does time affect the accuracy of the results?
Normal distribution - Sometimes called the “bell curve,” a normal distribution is a probability distribution that describes how values in an experiment or sample deviate from the mean, or average. Test scores are a good way to understand it: the majority of students might score around a B, while a select few fall in the tails of the distribution, with very high or very low scores.
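The bell-curve idea above can be sketched with a quick simulation. This is just an illustration, not part of any real study: the class average of 75 and the spread of 8 points are made-up numbers, and the point is simply to see the classic normal-distribution pattern, where roughly 68% of values land within one standard deviation of the mean and about 95% within two.

```python
import random
import statistics

random.seed(42)  # fixed seed so the draw is reproducible

# Simulate 1,000 test scores from a normal distribution:
# mean 75 (most students score around a B), standard deviation 8.
scores = [random.gauss(75, 8) for _ in range(1000)]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)

# Count what fraction of scores fall within one and two
# standard deviations of the mean.
within_one = sum(1 for s in scores if abs(s - mean) <= stdev) / len(scores)
within_two = sum(1 for s in scores if abs(s - mean) <= 2 * stdev) / len(scores)

print(f"mean of scores: {mean:.1f}, standard deviation: {stdev:.1f}")
print(f"within 1 sd: {within_one:.0%}, within 2 sd: {within_two:.0%}")
```

Run this and you should see roughly two-thirds of the simulated class clustered within one standard deviation of the average, with only a handful of very high or very low scores out in the tails. That is exactly the shape to keep in mind when a methods section reports means and standard deviations.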
While this is just an introduction, these are key terms to think about when looking at peer-reviewed academic work. And keep in mind that not being an expert in statistics doesn’t mean you need to be afraid of statistical analysis. In many ways, statistics in research is about making associations. Read the abstract, get the whole picture of what is going on in an experiment, and from there deduce as much information as you can.