
A Voice for Men on Rape Statistics: Confusing Life-Time Prevalence with Incidence


In the online conflict between certain specific men’s rights activists (MRAs) and certain specific fringe radical feminists, no party has its intellectual integrity fully intact. Elements on both sides misunderstand statistics and basic biology, and they also appeal to pseudoscience to justify their political ideology. For example, some fringe radical feminists, particularly those with postmodern inclinations, may devalue biological partial explanations or subscribe to the notion of a blank slate. Similarly, some men’s rights activists propose superficially plausible evolutionary accounts of the origin of the gender roles of the 1940s, but in the end cannot provide any scientific evidence for their highly speculative accounts (making them just-so stories).

I think that gender equality is a moral necessity, but it is terribly tedious and tiresome to read the spiraling conflict between extremists, especially when they make embarrassing rookie mistakes.

One such mistake is made by the blogger Phil in Utah over at the A Voice for Men blog, in a post about rape statistics that confuses life-time prevalence with incidence. Phil in Utah writes:

Statistic: “1 out of every 4 women will be raped in her lifetime.”

Truth: Ah, here’s the doozy. I’m sure we’re all familiar with the source of this statistic: a study by Mary Koss that has been discredited countless times. Around three-quarters of the women she identified as having been raped did not consider themselves victims of rape, and almost half of them had sex with their supposed attackers after the event identified as a rape had occurred.

I do not really know enough about the Mary Koss study to make an informed argument, but surely rape has to be defined as objectively as possible and not be based solely on personal opinion? So the argument that some of the women did not consider it rape, and that it therefore should not be counted as rape, seems wrong. Obviously you can be subjected to a crime even though you are not aware that it is considered a crime. A rose by any other name…

Let us look at some rape statistics from the CDC. In their National Intimate Partner and Sexual Violence Survey (NISVS), they reached the following conclusion with regard to rape prevalence among women. From the executive summary:

Nearly 1 in 5 women (18.3%) and 1 in 71 men (1.4%) in the United States have been raped at some time in their lives, including completed forced penetration, attempted forced penetration, or alcohol/drug facilitated completed penetration.

Now, rape prevalence will differ depending on how inclusive the definition of rape is (varies between countries), but according to the definitions used by the CDC, it is around 18% of women in the U. S. Although not exactly 1 in 4 (25%), it is fairly close. With that in mind, let’s see how Phil in Utah tackles U. S. rape prevalence:

So, what do statistics collected from non-feminist sources say? Well, let’s try the FBI statistics. According to an FBI report, which did not account for differing definitions of rape, whether or not the rapes were convicted, or whether or not female-on-male rape was included, the United States had a rate of 29 reported rapes per 100,000 people in 2009. That’s not going to get us to 25%, but I’m feeling generous, so let’s look at the country with the highest rate of rape in the past decade–South Africa, with a rate of 116 rapes per 100,000 people in one year. Percentage wise, this is .1% of the population. Now, I’ll admit that I’m worse at math than anything else in the world, but even I know this isn’t even close to “1 in 4″.

Here is the problem with that argument: the CDC looked at rape prevalence (actually, life-time prevalence, see below), whereas the data cited from the FBI report (and the South Africa argument) looked at rape incidence. This seems like a minor quibble, but it is of utmost importance. Here is why.

Prevalence and incidence are two key concepts within epidemiology and medical statistics. Let us look at how these two concepts work when it comes to, say, HIV. The prevalence of HIV tells you the total number of HIV cases (usually as a % of the total population). The incidence of HIV tells you the number of new cases of HIV arising during a certain time period. As we can see, it is really important not to confuse these different metrics. For HIV prevention, we want to focus on getting the incidence down. Getting the incidence down is one way to prevent the prevalence from rising, but obviously we do not want HIV patients to die, so we cannot just focus on trying to get the prevalence of HIV down by any means necessary.
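To make the distinction concrete, here is a minimal Python sketch using entirely hypothetical numbers (the population size and case counts are invented for illustration only):

```python
# Toy illustration of prevalence vs. incidence. All numbers are hypothetical.
population = 100_000        # total population
existing_cases = 900        # people living with the condition at the start of the year
new_cases_this_year = 50    # people who acquired the condition during the year

# Prevalence: the proportion of the population that has the condition at a given time.
prevalence = (existing_cases + new_cases_this_year) / population

# Incidence: the number of new cases arising during a time period (here per 100,000 per year).
incidence_per_100k = new_cases_this_year / population * 100_000

print(f"Prevalence: {prevalence:.2%}")
print(f"Incidence: {incidence_per_100k:.0f} new cases per 100,000 per year")
```

The two numbers answer different questions: prevalence describes the existing burden, whereas incidence describes the rate at which new cases appear.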

As an added complication, the life-time prevalence of rape looks at what proportion of the population has been raped at some time in their life. This means that we must separate discussions of the number of new cases of rape per unit of time (rape incidence) from discussions about the proportion of the population that has been raped at some time in their life (life-time prevalence of rape). Technically, it is the proportion that has been raped at some time during their life up to the point when the study is carried out. Obviously, scientists cannot draw on non-existent, future statistics.

So to sum up, the reason that the argument laid out by Phil in Utah fails is that it confuses prevalence with incidence. Phil in Utah seems to anticipate this objection when making the following rebuttal to a perceived counterargument:

“But wait!” the feminists are saying, “Most rapes are never reported to the police!” Well, I’ve heard a number of different figures on just how many. Some say 45%, some say 60%, and some even say 80%. But hey, I’m feeling EXTREMELY generous, so despite the fact that feminists are basing these numbers off evidence that is dubious at best, I’ll go with the highest estimate. .1 times 5 is…half of one percent. In other words, one-fiftieth of what feminists claim it is.

Now, I hear them whining that I missed the key phrase “In their lifetime”. Okay, since empirical data shows that rates of rape drastically decrease after the victim turns 45, whether they are male or female, in prison or out, I’ll just be accounting for a 30-year window. Sorry, feminists, but even my generosity has its limits. I’m not going to pretend that the wackos who rape grannies aren’t extreme outliers. This means that 15% of South African women will be raped in their lifetimes. A grisly figure to be sure, but then again, this is South Africa we’re talking about–it has the second-highest crime rate in the world. The rate of rape in the U.S. is one-quarter of that, so in our most generous of moods, it is correct to say that 3.75% of women will be raped in their lifetimes. I’m puzzled as to how that can be mistaken for 1 in 4.

Sadly, Phil in Utah falls prey to the same problem. He is making the argument that since the rape incidence in the U. S. is 1/4 of the rape incidence in South Africa, the rape prevalence in the U. S. has to be 1/4 of the rape prevalence in South Africa. This is of course wrong, as there is nothing that says that changes in rape incidence in the U. S. follow changes in rape incidence in South Africa. The latter has been strongly politically unstable for decades, so that is a clear confounder.

The life-time rape prevalence in South Africa seems to have been calculated by Phil in Utah by summing the rape incidence in South Africa (together with a modifier for rapes that go unreported) over 30 years (0.5% × 30 = 15%). However, this is not the same as life-time rape prevalence, which was, as we saw, the proportion of the population that has been raped at some time in their life.
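To see why summing annual incidence is not the same thing as life-time prevalence, consider a toy simulation (all parameters are invented): when some people are victimized repeatedly, the number of events accumulated over 30 years exceeds the number of distinct people affected, so the two quantities come apart.

```python
import random

random.seed(0)

# Toy model with invented parameters: most people face a low constant annual risk,
# while a small high-risk group accounts for repeat victimization.
population = 100_000
years = 30
baseline_risk = 0.001    # hypothetical annual probability for most people
high_risk_share = 0.02   # hypothetical: 2% of people face a much higher annual risk
high_risk = 0.05

events = 0           # total number of incidents (what summed incidence counts)
ever_victimized = 0  # number of distinct people affected (what life-time prevalence counts)

for _ in range(population):
    risk = high_risk if random.random() < high_risk_share else baseline_risk
    victim = False
    for _ in range(years):
        if random.random() < risk:
            events += 1
            victim = True
    ever_victimized += victim

print(f"Summed incidence over {years} years: {events / population:.1%} (events per person)")
print(f"Life-time prevalence: {ever_victimized / population:.1%} (distinct people affected)")
```

In this toy model the summed incidence overshoots the life-time prevalence because repeat victims contribute one event per incident but only count once as people; under-reporting in official incidence figures pulls in the opposite direction. The point is simply that the two metrics cannot be converted into one another without additional assumptions.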

Conclusion

Life-time rape prevalence for women in the U. S. is around 18% according to the CDC.

Phil in Utah makes the following statistical errors in his reasoning: (1) confusing (life-time) prevalence with incidence, (2) confusing (life-time) prevalence with the sum of incidence over 30 years, and (3) assuming that 1/4 of the incidence implies 1/4 of the prevalence (really a version of the first statistical error).

To make convincing statistical arguments, one should preferably understand the relevant statistics. Otherwise, one risks making embarrassing statistical errors.


36 thoughts on “A Voice for Men on Rape Statistics: Confusing Life-Time Prevalence with Incidence”

  • Oh how interesting. Causes me to remember the time I was raped in the Air Force, and then the time my birth mother told me she couldn’t tell me who my father is “because it was a rape.” That’s if I don’t think of the stuff my adoptive dad used to like doing out in the woods, first. There’s also the rapes that occurred to other girls on campus – oh wait, how many was it – while I was in college … Doesn’t feel like “incidence” or “prevalence” – how strange…I’m guessing from your thoughtful and sensitive article that you understand each time when people talk about rape that there might be victims right next to you…

  • Oh, and none of the first three events I mention were ever reported to police…but I can’t speak for others because I’m not one of those radicals so I’m just going to end by reporting that I’m unsubscribing, bye, ’twas nice while it lasted….

  • I disproved a common anti-feminist “argument” that is used to minimize the problem of rape…and you think I was being insensitive to rape victims?

  • The “1 in 71 men have been raped” stat from the CDC survey doesn’t tell the whole story. It defines “rape” as the attacker penetrating the victim, which excludes women who use their vagina to rape a man (rape by envelopment) which is counted as “made to penetrate”. The very same survey says “1 in 21 men (4.8%) reported that they were made to penetrate someone else,” which is far more than 1 in 71. Also, the study says that 79.2% of male victims of “made to penetrate” reported only female perpetrators, meaning they were raped by a woman.

    The above, lifetime stats do show a lower percentage of male victims (up to 1.4% rape by penetration + 4.8% made to penetrate = 6.2%) than female victims (18.3%), although it is far more than the 1 in 71 you stated. However, if you look at the report’s stats for the past 12 months, just as many men were “forced to penetrate” as women were raped, meaning that if you properly define “made to penetrate” as rape, men were raped as often as women.

    • You did not actually address the argument I made in this blog post. Do you understand the difference between prevalence and incidence? Do you accept that the blogger Phil in Utah confused these two metrics in his arguments?

      The 1 in 71 figure is not something I stated, but something I quoted from CDC. I also, after the quote, pointed out that this figure depends on the definition of rape. Did you manage to read that far?

      The report does state that the life-time prevalence of being made to penetrate someone else is 4.8%, and if we add that to 1/71, we do get 6.2%. This, however, is still a far cry from around 18%. In reality, the life-time prevalence of rape for men is irrelevant to the accurate characterization of the life-time rape prevalence of women. The fact that around 6% of men are raped does not mean that the 18% figure drops to 4%, as Phil in Utah would have it.

      For that particular year, it does seem as if the number of men who reported being forced to penetrate is about equal to the number of women who were raped. However, that is a single data point and tells us nothing about how things look overall. There may be certain days of the year when the temperature in Sweden and Australia is the same. Does that mean that the two countries have identical climates? No. That is why it is better to look at longer time periods, and that is why people care about life-time prevalence in the first place.

      For life-time prevalence, it is about 5.5 million men who have been made to penetrate and about 22 million women who have been raped (pp. 18-19). So even when we define both as rape, women carry the heavier burden.

      This is of course completely irrelevant to the point of my post, which was that the blogger Phil in Utah confuses prevalence with incidence.

    • My point is that your “1 in 71” quote is misleading. Your offhand comment of “Now, rape prevalence will differ depending on how inclusive the definition of rape is…” does not make it clear or even suggest that if rape is properly defined, the prevalence is far higher than 1 in 71 (actually 1 in 16). Instead, readers are being led to think “oh, only 1 in 71 men have been raped,” which is inaccurate.

      Yes, I accept that Phil in Utah confused incidence and prevalence, just like most discussions of the CDC study, outside of MRA sites, have confused rape with being penetrated and ignored the “made to penetrate” numbers.

    • Egalitarian, this may be an example of ‘violent agreement’, where both you and Karlsson agree on the facts, but differ only in some secondary feature (in this case, emphasis on which part of the topic). So it may look like you disagree with Karlsson, but I don’t think you do, actually.

      It’s just that the prevalence or non-prevalence in males is *irrelevant* to the main point of Karlsson’s article. For you, it may be an important issue that you want to pursue, but for the purpose of this article, Karlsson’s quoting of the 1 in 71 is merely incidental (he could have left it out entirely and still made his entire point), and not part of his argument.

      I do happen to agree with you on one point, that by quoting it out of context, and not explaining the context thoroughly, some people *may* go on to use that as a basis to claim, “Oh, it’s only 1 in 71.” *But*, if they did do this, then they would be guilty of perhaps even worse fallacies: Cherry picking, quote mining, etc. Needless to say, Karlsson would debunk their misleading quote just as thoroughly as he’s debunking this current fallacy of misusing statistics. It would not be the fault of Emil Karlsson, but the fault of the quote-miner.

      However, as I myself try to practice not repeating false/misleading information, I could see that you would have a stylistic argument that it is ‘better’ either not to quote the 1 in 71 at all, or to only quote it in complete context. I just don’t see that as an ethical issue. I think Karlsson’s article is clear enough on its own (I was not misled, myself), so I would understand if he felt no need to modify or amend it on this issue. (And I have been picky about misleading wording in articles before (recently, in fact), so I’m not merely dismissing your point. I just don’t think it ‘crosses the line’ in this instance.)

    • @Egalitarian,

      it should be obvious that these figures refer to rape as defined in the CDC report itself. Had I made a post that focused on the difference in rape statistics between men and women, I would have posted and discussed the definitions used. But this was not the point of the post.

      The concept of “rape as properly defined” is not uncontroversial (at what level of intoxication are people unable to give informed consent? Does subtle psychological manipulation count? etc.) and it is certainly not as easy as you make it out to be.

      Even using the most inclusive definition of rape possible from the CDC data (i.e. rape + other sexual violence):

      Women:

      Rape: 21,840,000
      Other Sexual Violence: 53,174,000

      = ~75 million victims

      Men:

      Rape: 1,581,000
      Other Sexual Violence (this includes being made to penetrate): 25,130,000

      = ~27 million victims

      These data are life-time prevalence figures. The figures for the past 12 months in 2010 cannot be reliably compared, because the estimates for men were not included due to a relative standard error > 30% or a cell size < 20%.

      So in order to get the numbers to come out the way you want them to, you would be required to define rape, with the figures already known, in such a way that the figures come out the way you need them to. In other words, you would be deciding your own result.
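      For transparency, here is a short sketch that simply reproduces the arithmetic behind the rounded totals above (the input figures are the life-time estimates quoted in this thread):

```python
# Life-time estimates as quoted above; the rounded totals follow directly.
women = {"rape": 21_840_000, "other_sexual_violence": 53_174_000}
men = {"rape": 1_581_000, "other_sexual_violence": 25_130_000}  # includes made to penetrate

women_total = sum(women.values())  # 75,014,000 -> roughly 75 million
men_total = sum(men.values())      # 26,711,000 -> roughly 27 million

print(f"Women: {women_total:,} (~{women_total / 1e6:.0f} million)")
print(f"Men:   {men_total:,} (~{men_total / 1e6:.0f} million)")
```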

    • Thaumas,

      You are right, I don’t disagree with Karlsson on the facts, but I would argue that quoting the “1 in 71” number for men is just as misleading as mistaking incidence with prevalence. Misleading statistics have the same impact whether they are due to factual errors, incorrect interpretations, or questionable definitions. I agree that it would have been reasonable to leave out the “1 in 71” number entirely.

    • If I had written

      Nearly 1 in 5 women (18.3%) […] in the United States have been raped at some time in their lives, including completed forced penetration, attempted forced penetration, or alcohol/drug facilitated completed penetration.

      …then I am sure someone would have accused me of trying to cover up the rape of men or minimizing the rape of men by quoting selectively.

      I guess there are some cases where you just can’t win.

    • I think you’re right, there, Emil. You can’t please everyone. The main thing is that you quoted it correctly (though perhaps not perfectly; and perfection is a rotten standard to live by anyway), and provided a link for people to check on their own. That’s completely fair, in my personal opinion. You didn’t actively misrepresent anything, and you’re aware of the subtleties of the reports and the statistics, and even gave notice that the numbers depend on the definitions used.

      @Egalitarian, I’m not unsympathetic to your desire not to mislead in this way, but I respectfully disagree that Karlsson’s quoting is “just as misleading as mistaking incidence with prevalence”, due to the fact that the quoting was a) incidental, not actually relevant to his point, and b) surrounded by enough warnings (e.g. that rates depend on definitions) and references (link to the original research) that an intellectually honest and responsible person would not reasonably use such an incidental quotation without first double-checking the source.

      That intellectually *dishonest* or careless people might misuse it that way is not the fault of Karlsson, but of such people themselves. And, as yet, such misuse of Karlsson’s quote is currently only hypothetical, and not actual; so again, I can’t say as he’s done anything wrong or unethical here. This is simply not on the same level as the misuse/misunderstanding of statistics regarding incidence and prevalence.

      The only occurrence of the 1 in 71 figure is *within* the quote itself, and Karlsson does not refer to it at all in his own words. He is not using it inappropriately, because he is not *using* it at all, except as incidental context of a quote about *another* statistic.

      “Misleading statistics have the same impact whether they are due to factual errors, incorrect interpretations, or questionable definitions.”

      I agree, but we should (IMO) focus our efforts on correcting actual misuses of such statistics, and on the people who actively use them inappropriately. I disagree that Karlsson’s quote here qualifies as an actual misuse of statistics, due to the reasons I gave above.

      And also, to repeat: If people wander on by and read random numbers from webpages and use that kind of unskeptical and uninformed method to base their worldviews upon, then *that* is the real problem — people thinking and acting irrationally and unreasonably. *That* is the problem I am focused on correcting. If people were better skeptics and critical thinkers (which is not actually that hard to do), then this entire side-issue you brought up would be an entirely moot point. Emil Karlsson’s article here is one example of an effort to help address that problem. As such, IMO it’s part of the solution, not part of the problem. We would all do well to think as clearly as possible on contentious issues such as this one. That’s the point.

  • Very sorry to hear about your experiences, Mipochka. I hope more people feel brave enough to bring such experiences to light. It is very important.

    Unfortunately, I think you may be burning a good bridge here, perhaps without realizing it. Just as it’s important to hear about the experiences of people who have been raped, it is also important to understand and use the science and statistics that have studied rape incidence and prevalence correctly.

    That is the job that Karlsson’s article above does so well, and I do not think I’m alone when I say that I found his analysis very helpful, which makes me more confident to speak out against the misuse of such statistics in the future.

    The reason this kind of analysis is important *in addition to* personal experience reports such as yours, is that one of the best ways to defuse an ongoing argument with people who are entrenched in their positions (such as some MRAs, and also some ‘radical’ feminists — but really any person entrenched in any position) is to appeal directly to the best evidence we have available about the objective reality of the macro-scale situation, such as by examining statistics about rape from national, international, or scientific studies.

    When people are entrenched, they go back and forth, “I’m right!” “No, I’m right!” with no way of resolving the conflict, *because* they do not rely on reliable evidence to come to their positions. It is only when people are forced to account for their positions by appealing directly to evidence that we can have a strong hope of resolving the conflict.

    The more people rely on their own personal convictions, the worse the disagreement gets. The more people rely on reliable, objective evidence of the actual *reality* of the situation, the more people will come to agreement *about* that reality.

    Karlsson’s article here (and his whole blog, really) is using this strategy: finding out what the reliable primary sources of information (government studies, international studies, scientific studies, etc.) say, and debunking the claims made by people who wrongly ignore and deny what’s *real* in favour of what they want to believe.

    In the end, reality always wins, so the side(s) that embrace reality are going to be proven right again and again. Karlsson is on the side of reality (and me too, or so I strive to be, anyway), so it’s unfortunate that you seem to be burning this bridge with him when he’s really on your side. I hope you reconsider, sometime.

  • Pingback: An Intellectual Re-evaluation of the “Schrödinger’s Rapist” Analogy « Debunking Denialism

  • I’ve read the article and most of the discussion but come away not knowing what to believe! Can we get a summary of the best information available which is as fair as possible? Incidence of rape per year for males and females, using the fairest definition(s) possible, would be a solid start.

    • Before that is possible, we have to establish what definitions are fair.

    • I don’t mean that I personally know what definitions are most reasonable. It is just the first step in the process before you can look at incidence.

  • One important question would be: is the definition of rape used by the CDC the same as the definitions in the penal codes? Considering the US has many different penal codes, that would strike me as almost impossible.

    You say, “Obviously you can be subjected to a crime even though you are not aware that it is considered a crime”. True, but if a large part of the population disagrees with the law – which theoretically, in a democracy, should be the expression of the will of the population – then who’s “wrong”, the law or the population?

    • I am not an expert on the American legal system, but I think rape is a state crime and so definitions could potentially vary across states.

      Even if a large part of the population disagrees with a law, the law is still active until it has been overturned. The USA is a constitutional republic, not a direct democracy.

  • Okay, I’ve asked my statistics lecturers about this since some people I know tend to get all snippy whenever you bring up numbers. [I use this terminology because it’s what happens when I start talking to law students. It’s really fun watching people pretend that your degree is useless because they can fuck up statistics all on their own.]

    In the CDC report, 79.2% of male rape victims, in the lifetime category, that were “made to penetrate” were victimized by a female.

    My lecturer says that it’s okay to take the 79.2% and multiply it by the year’s number for men who were “made to penetrate” since it would serve as the best possible estimate. [you know, population proportion applied to sample size]

    Would this be true?

    • Here is the relevant passage from the report with the 79.2% figure:

      For three of the other forms of sexual violence, a majority of male victims reported only female perpetrators: being made to penetrate (79.2%), sexual coercion (83.6%), and unwanted sexual contact (53.1%)

      So the question is then this: does it make sense to use the proportion from lifetime prevalence data as an estimate for the corresponding proportion for incidence data?

      There are two major arguments against this move:

      1. The average says nothing about the spread. The average proportion over a lifetime may be 79.2%, but for a given year (such as 2010), this proportion could potentially be substantially higher or lower. Do we know anything about the stability of this figure over time? If it is not stable, then this figure could be very misleading.

      2. It is a bit weird to say that an estimate is the “best possible estimate” if this is the only estimate we have for this dataset.

      The major argument in favor of this move is that if the 79.2% figure is the only estimate we have, it is the only estimate one can use. However, it is of vital importance to clearly mention the limitations outlined in (1) every time the 79.2% figure is used in this way.
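      To illustrate how such an estimate could be reported together with its sampling uncertainty, here is a sketch that applies a proportion to a count and attaches a Wilson score interval. The subgroup sample size and the 12-month count below are placeholders, since neither is given in this discussion:

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """Wilson score confidence interval for a proportion (95% by default)."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

p_hat = 0.792           # life-time proportion of made-to-penetrate victims reporting only female perpetrators
n = 500                 # PLACEHOLDER subgroup sample size (not given in this discussion)
count_2010 = 1_000_000  # PLACEHOLDER 12-month made-to-penetrate count, for illustration only

low, high = wilson_interval(p_hat, n)
print(f"Point estimate: {p_hat * count_2010:,.0f}")
print(f"95% interval:   {low * count_2010:,.0f} to {high * count_2010:,.0f}")
```

      Note that such an interval only reflects sampling error in the proportion; it does nothing to address the time-frame mismatch described in (1) above.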

    • Thank you for your reply. 🙂

      I think he meant that unless we actually had the real proportion, then the population proportion would be the best even if we had other options. [A confidence interval would be ideal, really, in this scenario.]

      While I agree that the listed problems should be mentioned whenever the number is used, I am wondering if you believe that the number would be higher or lower than the “lifetime average”? I’ve read a few arguments going either way.

    • I think he meant that unless we actually had the real proportion, then the population proportion would be the best even if we had other options. [A confidence interval would be ideal, really, in this scenario.]

      Well, that depends on what alternatives we have. For instance, if we had data from 2009 or 2011, that might be a better estimate of the 2010 figure than the lifetime prevalence figure. This kind of figure would not be vulnerable to trends over longer periods of time, but it is obviously still vulnerable to year-to-year variability.

      I am not convinced that the lifetime prevalence figure is the best estimate regardless of other non-2010 options.

      While I agree that the listed problems should be mentioned whenever the number is used, I am wondering if you believe that the number would be higher or lower than the “lifetime average”? I’ve read a few arguments going either way.

      I am going to have to answer that question with a clear “I don’t know”. I have not checked if there are incidence figures close to the year 2010 and I do not know anything about the stability of this proportion over time.

  • Ah, that makes sense. 🙂 Thank you for helping clear this up.

    Truth be told, I’m a little annoyed that the CDC didn’t disclose the proportion for 2011 in an otherwise extensive report. I personally would like more attention brought to male victims of female attackers, [to me the argument that “it distracts from female victims” is ridiculous since we’re looking at the same crime and there’s no reason why we can’t talk about both] but there are so few “acceptable” sources for it [I don’t mean “peer reviewed” – I mean “from a well-known organization” so that it’s not immediately dismissed as “probably not peer reviewed” or “old” or “limited” without the people I’m arguing with actually reading the source]. [sigh]

    P.S. Your site is very thorough and it’s great to see someone actually take statistics seriously.

    • Sexual violence against men has often received considerably less attention and male victims are often given less support. This is probably due to a combination of factors, such as:

      – flawed cultural expectations e. g. “men always want sex” or “men who get raped are weak and cannot defend themselves” etc.
      – flawed understandings of male physiology e. g. men allegedly cannot get raped because erections are thought to be voluntary and/or only occurring during consensual love-making.
      – historical context e. g. rape is considered by many to be a considerably larger problem for women than men throughout history.

      The situation is complicated.

      On one side, we have some non-feminists who do use the “what about men?!” approach as criticisms of feminism or to distract from / minimize the issue of rape against women.

      On the other side, we have some feminists who reflexively interpret any mention of male victimization as an anti-feminist trope and start throwing out accusations of misogyny.

      I have stopped trying to argue with either of these two groups. The best advice I can give is to do the same.

      More generally, the reason that I started a blog was that I was tired of trying to “debate” pseudoscientific cranks on blogs, forums and comment sections (endlessly long struggles on other people’s turf). This has made me less frustrated and better at choosing my battles. When I see something I disagree with, I can just write a detailed and referenced blog post about it. It has made me more productive and I often just ignore the most unproductive situations I come across (big relief!).

      Truth be told, I’m a little annoyed that the CDC didn’t disclose the proportion for 2011 in an otherwise extensive report.

      Have you tried sending them an email? Maybe they did not even collect that information for 2010?

  • I ended up emailing the CDC, and yeah they didn’t collect that information for 2010. I don’t really understand //why// they didn’t but that answers that question. I’ve sent them a few more information requests to clarify other things.

    I’m curious as to what you think of this page here: http://www.batteredmen.com/NISVS.htm. There is so much back-and-forth over the idea of battered husbands since that 1970s Strauss/Gelles report. Though, interestingly, while I found lots of critiques of the earlier reports I haven’t found any of the 1985 follow up (Societal Change and Change in Family Violence from 1975 to 1985 As Revealed by Two National Surveys). Not that I’m not //trying// to find critiques.

    • I ended up emailing the CDC, and yeah they didn’t collect that information for 2010. I don’t really understand //why// they didn’t but that answers that question.

      Alright, now we know that they do not even have that data to begin with. From my experience, researchers always fail to collect some data that in the end would have been interesting / useful. This is not necessarily due to malevolence, but more like “oh shit, we should have done X, Y, and Z as well”. This is, I think, what usually happens in a research project.

      There is so much back-and-forth over the idea of battered husbands since that 1970s Strauss/Gelles report.

      Most studies find that the proportion of men and women who are victims of intimate physical violence are comparable. Some people make a big deal about the fact that some studies even show that the proportion of men are slightly higher, but the survey response rate is often too low for such a result to be considered sufficiently valid. I usually just interpret such differences as comparable rates.

      I recently wrote a detailed analysis of two Swedish studies that examined intimate partner violence against men and women. Although the news media, MRAs and feminists all got it wrong by cherry-picking figures or misunderstanding the statistics involved, I took a more level-headed approach.

  • Sorry to bother you again, but I found this explanation from the CDC http://wehuntedthemammoth.com/2013/10/29/cdc-mra-claims-that-40-of-rapists-are-women-are-based-on-bad-math-and-misuse-of-our-data/

    Is it just me, or is their comparison “example” overly simplistic? It seems to completely ignore the sample size they give. While we don’t know the variance or the yearly number, this example seems to be the metaphorical equivalent of laughing at a child asking a serious question.

    While the “40% of rapists in 2010” is obviously going to be incorrect [http://i.imgur.com/wd4XiOd.jpg], it seems… I don’t know, it seems rather wrong to claim that “made to penetrate” and “rape” aren’t the same thing.

    • Thanks for the link! It was very interesting to read the arguments made by the NISVS representative (I had not seen it before). In essence, we can summarize them like this:

      (1) The definitions of sexual violence used by the CDC can be defended because they were developed by expert researchers.

      (2) The proportion of female perpetrators for life-time prevalence cannot be used with 12-month incidence figures for males being forced to penetrate due to time-frame mismatch.

      (3) There is not a 1-to-1 relationship between perpetrators and victims, as perpetrators could have multiple victims.

      (4) Data is not available for the number of total rape perpetrators and the number of female perpetrators for the created category (rape + forced to penetrate), as we only have victims (see 3).

      (5) The sample is constructed to be representative of victims, not perpetrators, and certainly not perpetrators of a specific kind of sexual violence.

      (6) It neglects data cells where data were deemed not to be statistically reliable.

      Although I knew about the (3) issue and the subsequent problems that arise in (4), I did not think about them when writing my previous comment (where I just brought up (2)).

      I think it is reasonably well-known that most sexual violence is committed by a small proportion of perpetrators who have a great number of victims per perpetrator, so (3) makes a lot of sense. In fact, this precise error was committed by the feminists at The Enliven Project who constructed a rape infographic that went viral. Archfeminist Amanda Marcotte took them to task over it here.

      Obviously, not all of the problems brought up by the NISVS representative are equally destructive to the effort to find proportion of female perpetrators, but I think that (2)-(3) are sufficient to capsize the project of trying to find proportion of female perpetrators from data presented in the NISVS report.
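      As a tiny numeric sketch of why point (3) matters (all figures invented): when different groups of perpetrators have different average numbers of victims, victim-based proportions do not translate into perpetrator-based proportions.

```python
# Invented numbers purely to illustrate point (3): perpetrators can have multiple victims,
# so a group's share of victims is not the same as its share of perpetrators.
female_perpetrators = 100
male_perpetrators = 100
victims_per_female_perpetrator = 1  # hypothetical
victims_per_male_perpetrator = 5    # hypothetical: fewer perpetrators can account for many victims

victims_of_female = female_perpetrators * victims_per_female_perpetrator  # 100
victims_of_male = male_perpetrators * victims_per_male_perpetrator        # 500

female_share_of_perpetrators = female_perpetrators / (female_perpetrators + male_perpetrators)
victims_with_female_perpetrator = victims_of_female / (victims_of_female + victims_of_male)

print(f"Female share of perpetrators:      {female_share_of_perpetrators:.0%}")    # 50%
print(f"Victims with a female perpetrator: {victims_with_female_perpetrator:.0%}")  # ~17%
```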

      In the end, I think that

      – it is really important to always try to find out what definitions the different researchers use.

      – promoting awareness of male victims of sexual violence should move from trying to compare figures for men and figures for women to emphasizing (a) the life-time made-to-penetrate prevalence figure for men and (b) the proportion of male victims from life-time prevalence figures who reported only female perpetrators. It can also be useful to talk about (c) the 12-month incidence of made-to-penetrate regardless of the sex of the perpetrators. These three figures can constitute good points even without trying to find the proportion of perpetrators who are women.

      – the NISVS representative is probably right that we should start at the raw data level. Studies should be carried out on male victims of sexual violence and their perpetrators.

      – the feminist blogger makes a good point that male victims of sexual assault are a real issue and are not given sufficient attention. It is indeed unfortunate that real issues are being obscured by bad (and often statistically problematic) arguments.

  • Ah… I swear I posted another comment here earlier today…

    • Anyway, it had said that I sent them an email and they didn’t have “reliable estimates” for the yearly number. I’m… not really sure what that means considering they have the numbers for who was assaulted.

    • It got caught in the spam filter. I fished it up and posted it. See above.

    • I think my newest comment also was eaten by the spam filter >_< I must apologize.

  • Re battered men:

    Your article is very even-handed and you don’t seem to be swaying to either ideological side. I would definitely agree that the issues you’ve outlined for each of those (that would actually extend to most of the domestic violence studies I’ve read) definitely need to be kept in mind when reading these studies.

    Shame journalistic integrity almost seems to be non-existent when it comes to issues that provoke emotional responses. Then again, the same thing happens in debating (I’m in a few debating clubs. Because clearly I have some sort of masochistic tendencies.)

    Re: CDC

    Comment 1: Yeah, I didn’t think they’d omit it maliciously. I got to conduct a survey for one of my units last semester and there are a lot of “Dammit, should have asked about that” moments in the later stages of analysis. I guess I was just holding them to a higher standard since they’re a “big organization” even though, realistically, this area of study still has a learning curve in play.

    Comment 2: Ah, thank you – you’ve made it a lot clearer.

    Re (3) & (4): Since it would be incredibly difficult to actually ascertain whether it was different females committing these attacks, if the CDC was to release perpetrator data in the future it wouldn’t be “X% of rapists are women”, it would be “X% of male victims made to penetrate this year had female assailants”? Admittedly a less “sexy” title but it would be more accurate if we had the information.

    I was actually incredibly excited when this study actually drew attention to male victims and it’s sad to see that rather than this being used as evidence that more help should be given, there are a lot of people using it to promote anti-feminist agendas or, in some cases I’ve noticed, using the poor work of that group of people to ignore the issue entirely.

    I actually have a question with regards to the lifetime vs yearly numbers: which do you believe is a more accurate measure to use? The lifetime numbers certainly give us a picture of what has happened previously (i.e. that women are definitely the great majority of rape/sexual assault victims), but it’s also been argued (Murray Strauss 2005) that the more recent yearly results may be a better indicator of recent and immediate-future happenings.

    • I guess I was just holding them to a higher standard since they’re a “big organization” even though, realistically, this area of study still has a learning curve in play.

      It is probably worse for bigger organizations: a lot more people involved, a lot more people to argue and compromise with and so on.

      Since it would be incredibly difficult to actually ascertain whether it was different females committing these attacks […]

      Well, a perpetrator-focused study could probably shed some light on it. Ask a national representative sample of women if they have ever made a man penetrate them when he did not want to (or under certain circumstances such as violence, threat of violence, alcohol or drug-facilitation etc.) without using trigger words like rape or assault and if so, how many men / how many times.

      If the CDC was to release perpetrator data in the future it wouldn’t be “X% of rapists are women”, it would be “X% of male victims made to penetrate this year had female assailants”? Admittedly a less “sexy” title but it would be more accurate if we had the information.

      Maybe something like that provided they can fix problem (5). Otherwise it would be more like “X% of male victims in this sample made to penetrate…” Overall, I think these kinds of questions are better answered by perpetrator-focused research (see above).

      I actually have a question with regards to the lifetime vs yearly numbers: which do you believe is a more accurate measure to use?

      In the end, I think it depends on what kind of questions you want to examine. In general, I do not find life-time prevalence or 2010 incidence data to be useful when trying to get a grip on the current or future situation(s). Life-time prevalence figures could be influenced too much by the past, and 2010 incidence data could be too sensitive to year-to-year fluctuations.

      Obviously we would want to do this kind of incidence research every year or every two years, but there are probably not enough resources available.

