Investigative Skepticism Versus the Mass Media

Relationship violence against men

We are constantly being bombarded with messages from newspapers, television, blogs and social media sites like Facebook and Twitter about alleged facts, recently published scientific studies and government reports. With the knowledge that the mass media often get things wrong when it comes to science, how can you separate the signal from the noise?

One popular approach is to check what many different news organizations have to say about the issue. However, this ignores the fact that many websites just rewrite stories they have seen on other websites. Some even go so far as to copy/paste press releases. In the fast-paced world we live in, getting the “information” out there as fast as possible has apparently come to triumph over scientific and statistical accuracy. This problem is aggravated when the misinterpretation fits snugly within a particular political or philosophical worldview (e.g. some conservative groups and climate change denialism). Another approach is to limit yourself to reading news only from websites that fit with your own positions. However, this leaves you open to considerable bias. The classic example is anti-immigration race trolls who only read “alternative media”, which tend to twist many of the news items they publish to fit their agenda. A third approach is a combination of the two above: only believe things that news organizations with radically different stances agree on. The downside is that this almost never happens with issues that are scientifically uncontroversial but controversial in the eyes of the public (climate change being the obvious example).

This post will outline an explicit investigative method based on scientific skepticism designed to find out the truth behind popular stories on science. To illustrate it, a case study of the mass media treatment of two new Swedish studies on relationship violence against men will be described. Key aspects of this method include:

Strong attention to certain cognitive biases: when reading news reports on science, it is vital to keep certain logical fallacies and cognitive biases in mind. Especially important are the single study fallacy, confirmation bias and sensationalism. The single study fallacy occurs when an individual study is given too much importance. Because any single study can be flawed, one should not trust its results in isolation. Instead, look at the convergence of evidence across multiple studies (but keep publication bias in mind). In this context, confirmation bias occurs when the alleged study results fit with the overarching belief system of the newspaper or the individual journalist. Finally, sensationalism is the bias of overstating the impact or relevance of a story to get views. Essentially, keep the question “How could they have botched this reporting?” in mind whenever you read about science in the media. Although this approach might appear to suffer from confirmation bias itself (i.e. always looking for mistakes), it is meant to counter the prevalent confirmation bias in the other direction.

Degrees of confidence: consider the situation as a hierarchy of confidence: the less credible the reporting source, the less confidence you should have in its accuracy. Have a low degree of confidence towards stuff making the rounds on social media or found in tabloids. If you can find a press release from the university performing the study, assign a somewhat higher degree of confidence to the story, but take steps to compare the way the tabloid presented the study with how the press release described it (are there clear misinterpretations in the way the tabloid carried the story?). Try to find the original study and compare it with the way social media, the tabloids and the press release cover it. Expand the search to other scientists commenting on the study, both in print and in the blogosphere. Finally, summarize all the information.

Evaluation of data analysis: do not fall for claims like “there is a difference between group X and group Y” in some area. Always demand information about the effect size (how big is the difference?), confidence intervals (how precise is the measurement?) and an interpretation of the data in the relevant scientific context. After that, evaluate whether the conclusions made by the study authors, the university, the mass media or social media fit with a competent evaluation of the actual data. Classic problems include the reporting of a difference that turns out to be practically negligible, or the reporting of no difference simply because the sample size, and therefore the statistical power, was too low to detect one (the sketch below illustrates both problems).

Search engine skills: being able to find newspaper pieces, blog posts, press releases, commentary and original studies will to a large degree depend on your skills with a search engine, or on being able to outsource those tasks to other people on social media sites. It is not just about finding as much information as possible, but about finding the relevant information. This can be done by actively formulating questions as the search proceeds (e.g. what was the sample size? what was the response rate?).
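To make the effect size and precision questions concrete, here is a minimal sketch in Python of the two classic failure modes mentioned above: a huge sample making a negligible difference look “significant”, and a tiny sample hiding a sizeable one. The diff_ci helper and all numbers in it are purely hypothetical illustrations, not values from any study discussed in this post.

```python
# Two questions to ask about any "group X differs from group Y" claim:
# how big is the difference (effect size), and how precisely is it
# measured (confidence interval)? Uses a normal-approximation (Wald)
# interval; all figures below are hypothetical.
from math import sqrt
from statistics import NormalDist

def diff_ci(p1, n1, p2, n2, level=0.95):
    """Difference between two proportions with a Wald confidence interval."""
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return diff, (diff - z * se, diff + z * se)

# Huge sample: a 1 percentage point difference is "statistically
# significant" (the interval excludes zero) but practically negligible.
print(diff_ci(0.50, 100_000, 0.51, 100_000))
# -> (0.01, (0.006, 0.014)) approximately

# Tiny sample: a sizeable 6 percentage point difference, but the interval
# spans zero, so the study simply lacked the power to detect it.
print(diff_ci(0.08, 100, 0.14, 100))
# -> (0.06, (-0.026, 0.146)) approximately
```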

Let us move on to the case study that will illustrate this approach. It concerns the media reporting on two new Swedish studies on relationship violence towards men. The media carried the story as if the studies showed that relationship violence against men is more common than relationship violence against women. In reality, the studies had too low a response rate, and the final data samples were probably not representative enough for such a claim to be justified by the data collected. Furthermore, the observed differences (either 0 or 3 percentage points) were too small to be of practical relevance. In essence, the rational conclusion is that two small and limited studies observed comparable incidences of relationship violence towards men and women, but the absolute incidences and incidence comparisons asserted by these studies cannot be trusted in isolation.

The case study

As I was browsing a Swedish tabloid (Aftonbladet), I came across a short news item that they had taken directly from a press agency that specializes in putting out short releases very fast (TT). Here is the full text of the statement (my translation):

Men more exposed to relationship violence

Gothenburg. According to two new studies at the Sahlgrenska Academy regarding relationship violence, slightly more men than women have been pushed or beaten by their partner during the last year.

– It is surprising, the usual perception has been that men are not exposed. But both women and men use violence, says Gunilla Krantz, professor at the department of public health and community medicine, the lead author of the studies.

Out of a total of 1400 respondents, eight percent of women report that they have been exposed to physical violence by their husband or partner during the past year. The corresponding figures for men in the two surveys are eight and eleven percent, respectively.

The first time I saw that the tabloid had a news item with this title, I became suspicious. After all, it is just a tabloid, and tabloids often get the science wrong. The moment I saw the headline, I automatically asked myself: “How big is the actual difference?” and “With what precision?”. Fortunately, the note provides us with the observed differences: 0 and 3 percentage points, respectively. Although no information about precision is provided, I tentatively interpreted the differences as being somewhere between negligible and small in the sociological context. I also noted the stated sample size and thought that it was sufficient (but since the studies used surveys, I was annoyed that they did not report the response rate).
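The news item gives no precision at all, but one can do a rough check. Assuming, purely for illustration (the note only reports “1400 respondents” across both surveys), around 350 men and 350 women per survey, a normal-approximation interval for the 3 percentage point difference looks like this:

```python
# Back-of-the-envelope precision check for the tabloid's 11% vs 8% figure.
# The per-group sample sizes are my assumption, not reported figures.
from math import sqrt

p_w, p_m, n = 0.08, 0.11, 350
diff = p_m - p_w                                    # 3 percentage points
se = sqrt(p_w * (1 - p_w) / n + p_m * (1 - p_m) / n)
print(diff - 1.96 * se, diff + 1.96 * se)           # ~ -0.013 to 0.073
# The interval spans zero: a difference this size, with samples roughly
# this small, is statistically indistinguishable from no difference.
```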

At this point, I wrote down key pieces of information (e.g. topic, lead author, actual differences, sample size and so on) that would help me find the additional information I wanted (e.g. university press releases, original studies, response rate, precision and so on). This step is often underrated, and you would be surprised how much help a sample size and a lead author name can be when trying to find additional information.

By using a search engine, I managed to track down the university press release (in English). It used the same misleading title (“Male Victims of Partner Violence Outnumber Females”), but it provided me with a lot of additional information, such as:

1. More women than men reported using violence in self-defense.
2. Female victims had more severe health outcomes than male victims.
3. More women than men report being the victim of sexual abuse by their current partner (10% versus 3.5% prevalence, 3% versus 0.6% incidence for the past year).

(When you were reading the above, did you instantly think “what are the actual differences?” and “what is the precision of those estimates?”)

The university press release did not answer any of the questions about response rate or precision of the estimates that I had written down, but it did provide references to the original studies.

Nybergh, Lotta, Taft, Charles, & Krantz, Gunilla. (2012). Psychometric properties of the WHO Violence Against Women instrument in a male population-based sample in Sweden. BMJ Open, 2(6). doi: 10.1136/bmjopen-2012-002055

Lovestad, Solveig, & Krantz, Gunilla. (2012). Men’s and women’s exposure and perpetration of partner violence: an epidemiological study from Sweden. BMC Public Health, 12(1), 945.

Fortunately, both papers were published in open access journals, so their full text is available for free. Before reading these studies, my two main questions were: “what was the response rate (i.e. related to representativeness of the sample)?” and “What was the precision of the estimate?”

Nybergh and colleagues (2012) reported a sample of n = 1009, and a raw response rate of 45.4% (n = 458). However, some respondents did not answer any of the questions on violence, so they were excluded. The final sample was n = 399. That is a rate of 39.5%. The problem with a low response rate is that the respondents may systematically differ from non-respondents, and so bias the collected data.

This study estimated the incidence of physical abuse against men from their intimate partners at 7.6%, with a 95% confidence interval from 5.0% to 10.2%. The width of the confidence interval (5.2 percentage points) was close to the absolute size of the estimate itself (7.6%). In the section on methodological limitations, the authors wrote:

The overall non-response rate was high (54.6%) and response rates were lower among young men, unmarried men, men with a lower annual income and men born outside Sweden, which compromises the generalisability of our results. Given that previous studies have found some of these groups to be associated with higher levels of IPV among men, our study may have underreported exposure to IPV. Also, the earlier-in-life estimates may have been underestimated due to a minor detail on the questionnaire layout. Furthermore, the subsample of respondents who answered both the VAWI and the NorAQ is small, which limits our ability to draw conclusions or generalise to the target population.

Little is known about men’s response patterns in surveys on violence exposure perpetrated by their intimate partners. A recent review of gender differences in self-reported IPV cites some studies in which men underreport their experiences of IPV, whereas another review found studies pointing to the contrary. Future research investigating men’s patterns and reasons for responding or not responding to a postal survey on IPV, especially in a Nordic context, would shed more light on these matters.

In other words, due to the low response rate and non-representativeness, the observed incidence cannot be considered credible in isolation.
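As a sanity check, the reported interval can be reproduced almost exactly from the reported figures (7.6% of n = 399) with a standard normal approximation:

```python
# Verify that the paper's reported 95% confidence interval follows from
# a normal approximation: p = 7.6% with n = 399 respondents.
from math import sqrt

p, n = 0.076, 399
se = sqrt(p * (1 - p) / n)
print(p - 1.96 * se, p + 1.96 * se)   # ~0.050 to 0.102, i.e. 5.0% to 10.2%
```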

The study by Lovestad and colleagues (2012) faces similar issues. It looked at a sample of n = 1007, and the raw response rate was 49.6% (n = 499). After excluding individuals who did not respond to any of the questions about violence, n = 424 (42%) individuals remained. Here is how the study describes the response rate limitation:

Among the limitations, the most notable is the rather low response rate and the internal drop-out rate related to the violence items, especially for men. Since the majority of external and internal drop-outs were unmarried, it might be that they had no experience of intimate relationships and therefore were not further motivated to answer the violence-related items. Further, those individuals most exposed and/or perpetrating violence might have been reluctant to fill in the questionnaire because of shame or fear of being identified, which has been found in an earlier study. We believe also that this questionnaire may not be ideal for data collections through mailed questionnaires as the violence items are profuse and rather detailed in content. Another limitation is that the low number of respondents reduced the power of the analysis.

In other words, there are reasons to suspect that the final sample data are not representative. What about the precision of the estimates? As it turns out, this study does not even calculate confidence intervals for the two incidence estimates. Another interesting piece of information is that more women (15.9%) than men (11.0%) were exposed to physical abuse earlier in life. Although precision estimates are missing, this effect size was not reported by any newspaper or by the university press release.
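To put “reduced the power of the analysis” in numbers: assuming (my assumption, since the per-group split is not quoted above) roughly 212 men and 212 women in the final sample of 424, the approximate power to detect a true 8% versus 11% difference is dismal:

```python
# Rough power of a two-sided two-proportion z-test (alpha = 0.05) to
# detect a true 8% vs 11% difference with ~212 respondents per group.
# The per-group split is an illustrative assumption.
from math import sqrt
from statistics import NormalDist

p1, p2, n = 0.08, 0.11, 212
se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z_crit = NormalDist().inv_cdf(0.975)
power = NormalDist().cdf((p2 - p1) / se - z_crit)
print(round(power, 2))                # ~0.18
# Roughly an 18% chance of detecting a true 3-point difference: finding
# "no significant difference" here says little about whether one exists.
```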

A final consideration is that both journals have relatively low impact factors: 1.6 for BMJ Open and 2.08 for BMC Public Health.

None of the mass media reports mentioned the problem with the low response rates or evaluated the differences in a sociological context. None mentioned that the life-time prevalence estimate was larger for women than for men, and that this difference was larger than the incidence differences that were always mentioned. They did not mention that more women probably used violence in self-defense, that female victims probably had more severe health outcomes, or that more women were probably sexually abused by their current partner.

Comparison with an ideologically driven interpretation

In summary, these papers suggest that the frequencies of physical abuse towards men and women are comparable. The absolute figures and the observed differences in incidence (8% vs. 8% and 8% vs. 11%) cannot be taken at face value. Let us compare this kind of investigative skepticism with ideologically motivated reasoning. The example we are going to look at is a forum discussion at the right-wing Swedish Internet forum Flashback. Here are a few examples of their commentary (my translation):

What does the feminist mafia say about this survey? Had you expected that we almost, with men at the disadvantage, have equality when it comes to relationship violence?

There was no attempt at even a minimal critical analysis of the data. Soon, the conspiracy theories crop up:

It is the end of 2013 and the truth about the developmentally disruptive feminism starts to trickle through the victim facade. Already a few years ago a report was released about violence in intimate relationship. But since the only talent feminism has is its fatal weakness, the victimhood; they cannot possibly handle a non-equal view of our gender. Now comes the cracking, soon feminism will collapse to a pathetic residue of low-quality misandry; that no one will want to avow. But hahaha…their so called gender research will testify to a time, when research could be so arbitrarily partial to victimhood that they completely missed reality.

If they had just taken a few moments to actually read the papers, they would not have had to embarrass themselves like that.

There are a couple of limitations to this approach. The first is that it takes much more time than reading the tabloids, and it is probably not possible to look up all the information presented. A second limitation is that some papers are behind a paywall. However, some papers can be found online using a search engine, and one can always try a university library, email the author or ask on social media sites like Twitter.
