How ISIS’ language changed over time: more concern with females and more “net-speak”

We (myself and Ana-Maria Bliuc) have just published a brief research paper in the Italian journal “Security, Terrorism and Society”. We used the computerized text analysis program LIWC (Linguistic Inquiry and Word Count) to investigate the evolution of the language across the first 11 issues of Dabiq.
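For readers who have not used LIWC: at its core it is dictionary-based word counting, where each category is a list of words and a text’s score is the percentage of its words that fall in that list. Below is a minimal Python sketch of this idea; it is not the actual LIWC software, and the word sets are tiny illustrative stand-ins for the real, validated LIWC dictionaries.

```python
import re
from collections import Counter

# Tiny illustrative stand-ins for LIWC dictionaries; the real LIWC
# categories contain many validated words and word stems.
CATEGORIES = {
    "female": {"she", "her", "hers", "woman", "women", "girl", "wife",
               "mother", "sister"},
    "netspeak": {"btw", "lol", "thx", "omg", "imo"},
}

def category_rates(text):
    """Return each category's score as a percentage of total words,
    which is how LIWC reports its output."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values()) or 1  # guard against empty input
    return {cat: 100.0 * sum(counts[w] for w in vocab) / total
            for cat, vocab in CATEGORIES.items()}

# Scoring each issue of Dabiq separately yields one value per issue;
# the trend across issues 1-11 is what the figures below display.
print(category_rates("btw she told her sister the news lol"))
```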

Our paper shows ISIS’ increasing concern with females. This is especially important because it suggests that ISIS needs to attract not only fighters but also women, in order to build a society composed not only of warriors but also of families, where people can live an “ordinary” life. This is a cornerstone of the ISIS “utopia”, which is a powerful radicalization motive. The next figure shows the increasing concern with females in ISIS’ language.

[Figure: frequency of the LIWC “female” category across Dabiq issues 1–11]

Additionally, our analysis shows that ISIS increased its use of internet jargon (for example, abbreviations like “btw”, “lol”, “thx”). We believe this suggests that ISIS is adapting to the conventions of the internet environment and aims to connect with the identities of young individuals. The next figure shows the increase in “net-speak” in ISIS’ language.

[Figure: frequency of the LIWC “netspeak” category across Dabiq issues 1–11]

We believe that analyzing ISIS’ language with LIWC categories is particularly interesting because it offers insights into the motives, emotions and concerns of the terrorist group. Research in the psychology of political leadership has shown that the success of a leader depends on a match between the leader’s personal characteristics, the historical context and the followers’ psychological characteristics. The psychological structures of a text can generate identification in audiences that recognize themselves in those structures and motives: the fact that ISIS is more concerned with females means, for example, that ISIS is trying to connect with women and with people concerned about women; the fact that ISIS uses more “net-speak” means that the group wants to connect with people who use the same language.

This is just a descriptive study that we hope can generate discussion. More research is needed in this area: we (myself and Ana-Maria) have conducted more studies on ISIS’ language that are under review and will (hopefully) appear soon on this blog.

ISIS threat makes Italian Catholics more supportive of right-wing politicians who are hostile toward Muslims

Only a tiny minority of Muslims supports ISIS. Yet, when people in Western societies perceive a higher threat from ISIS, they tend to become more hostile toward all Muslims.

We provided evidence to support this proposition with an experiment conducted among Italian Catholics.

In an article just published in the Journal of Ethnic and Migration Studies, Enrico Tacchi and I investigated the effect of the ISIS threat on Catholic Italian voters.

The results of the experiment suggest that the threat of ISIS activated religious identity in Catholic Italian voters and increased support for right-wing politicians who expressed hostility toward Muslims.

The following figure shows support for a center-right politician who says: “In Italy there is no space for Mosques” (higher scores mean more support for the politician). The red bar shows scores for participants who, before rating their agreement with the statement, were asked to read a newspaper article about the ISIS threat to the Vatican. The blue bar shows scores for participants in the control group, who were asked to read a different article unrelated to terrorism (about the Scottish referendum). If you are curious about the methods we used, please have a look at the article.

[Figure: mean support for the politician’s statement in the ISIS-threat condition (red) vs. the control condition (blue)]
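To give a sense of the comparison behind the figure, here is a minimal sketch with made-up numbers; the real data and analysis are reported in the article, and I am assuming a simple independent-samples comparison of mean support between the two reading conditions.

```python
from scipy import stats

# Hypothetical ratings of support for the politician (the real data are
# in the published article). "threat" = read the article about the ISIS
# threat to the Vatican first; "control" = read the Scottish-referendum
# article first.
threat = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6]
control = [3, 4, 2, 4, 3, 5, 3, 4, 2, 3]

# Independent-samples t-test comparing mean support across conditions.
t, p = stats.ttest_ind(threat, control)
print(f"threat mean = {sum(threat) / len(threat):.2f}, "
      f"control mean = {sum(control) / len(control):.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```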

We believe that this study provides an important piece of empirical evidence for understanding the effects of the wave of anxiety arising from the Islamic terrorist threat that has recently hit Europe and that is probably not going to dissipate anytime soon.

Another reason why it is hard ‘to get those results’ when we replicate a study: the flexibility-ambiguity problem

Today I found an excellent article that explains what the flexibility-ambiguity problem is and how we can solve it with simple requirements for authors and guidelines for reviewers: http://pss.sagepub.com/content/22/11/1359.short?rss=1&ssource=mfc

The core of the flexibility-ambiguity problem is what the researchers call “researcher degrees of freedom”. As the authors explain:

In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?

Exploring various alternatives in search of a combination that yields a significant p-value leads to what the authors consider the most frequent and costly error in research: the false positive.

I think that three important findings of this article are:

  • researchers who start by collecting 10 observations per condition and then test for significance after every new per-condition observation find a significant effect 22% of the time (see the simulation sketch after this list)
  • it is wrong to think that if an effect is significant with a small sample size then it would necessarily be significant with a larger one
  • the false-positive rate when a researcher uses all of the common degrees of freedom is 61%: a researcher is more likely than not to falsely detect a significant effect just by using these four common researcher degrees of freedom (i.e., collecting multiple dependent variables, analyzing results while collecting data, controlling for covariates or interactions, and dropping (or not) one of three conditions)
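To make the first finding concrete, here is a minimal simulation sketch of optional stopping, under assumptions of my own choosing (normally distributed data, two conditions, a cap of 50 observations per condition):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_study(start_n=10, max_n=50, alpha=0.05):
    """One simulated study under the null (no true effect): start with
    start_n observations per condition, test after every additional
    per-condition observation, and stop as soon as p < alpha."""
    a = list(rng.standard_normal(start_n))
    b = list(rng.standard_normal(start_n))
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # a false positive: the null is true by construction
        if len(a) >= max_n:
            return False
        a.append(rng.standard_normal())
        b.append(rng.standard_normal())

studies = 2000
false_positives = sum(optional_stopping_study() for _ in range(studies))
# With a single pre-planned test the rate would be 5%; peeking after
# every observation gives chance after chance to cross the threshold.
print(f"false-positive rate: {false_positives / studies:.1%}")
```

Because the simulated null is true by construction, every “significant” result here is a false positive, and the rate lands well above the nominal 5%.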

As a solution to the flexibility-ambiguity problem, the authors propose the following.

The requirements for authors should be:
1. Authors must decide the rule for terminating data collection before data collection begins and report this rule in the article.
2. Authors must collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification.
3. Authors must list all variables collected in a study.
4. Authors must report all experimental conditions, including failed manipulations.
5. If observations are eliminated, authors must also report what the statistical results are if those observations are included.
6. If an analysis includes a covariate, authors must report the statistical results of the analysis without the covariate.

The guidelines for reviewers are:
1. Reviewers should ensure that authors follow the requirements.
2. Reviewers should be more tolerant of imperfections in results.
3. Reviewers should require authors to demonstrate that their results do not hinge on arbitrary analytic decisions.
4. If justifications of data collection or analysis are not compelling, reviewers should require the authors to conduct an exact replication.

The best quote from this article is: “Our goal as scientists is not to publish as many articles as we can, but to discover and disseminate truth.” (p.1365)

Of course “truth” is a problematic notion, and critical thinkers would argue against the very existence and knowability of “truth”. Yet, I think that truth is (at least) honesty, and the exercise of publishing as many articles as we can sometimes pushes scholars to be less honest.

The full reference of the article is: Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.

A very good article about “The Extent and Consequences of P-Hacking in Science”

Yesterday I found this article about p-hacking in science. You can find it here: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002106

Let me summarize two premises of the article here.

What is p-hacking?

P-hacking (also known as “inflation bias”, “selective reporting” … and “cherry-picking”) is the misreporting of true effect sizes. It occurs when researchers:

  • conduct analyses midway through experiments to decide whether to continue collecting data,
  • record many response variables and decide which to report postanalysis (see the simulation sketch after this list),
  • decide whether to include or drop outliers postanalysis,
  • exclude, combine, or split treatment groups postanalysis,
  • include or exclude covariates postanalysis,
  • stop data exploration if an analysis yields a significant p-value.
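As an illustration of the second practice in this list, here is a minimal simulation of my own (not taken from the article) showing how recording several response variables and reporting the best p-value inflates the false-positive rate even without any data peeking:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(n=20, n_dvs=2, alpha=0.05):
    """One simulated study under the null: both groups are drawn from
    the same distribution, but the researcher records n_dvs response
    variables (independent here, for simplicity) and reports whichever
    yields the smallest p-value."""
    group_a = rng.standard_normal((n, n_dvs))
    group_b = rng.standard_normal((n, n_dvs))
    best_p = min(stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
                 for j in range(n_dvs))
    return best_p < alpha

studies = 5000
rate = sum(one_study() for _ in range(studies)) / studies
# With two independent response variables the expected rate is
# 1 - 0.95**2, i.e. about 9.8% rather than the nominal 5%.
print(f"false-positive rate with two DVs: {rate:.1%}")
```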

Why should we care about p-hacking?

Meta-analyses are compromised if the studies being synthesized do not reflect the true distribution of effect sizes … and meta-analyses guide the application of medical treatments and policy decisions, and influence future research directions.

Possible answers may be: “Let him who is without sin cast the first stone” or “Most of us have done some form of p-hacking because the system encourages us to do it”. However … we really need to find a way out.

This, for example, seems like a good idea: https://royalsociety.org/news/2015/05/royal-society-open-science-to-tackle-publication-bias/