Facebook claims to have removed approximately 1.9 million pieces of extremist content linked to ISIS or Al-Qaeda in the first three months of 2018, roughly double the amount from the previous quarter. The "vast majority" of these nearly 2 million pieces of content was removed outright; a small portion received a warning label but was not taken down, because it had been shared for informational purposes or to counter extremism. The social network announced the figures in a post on its company blog.

Ninety-nine percent of the removed content, the post continues, was identified not through external reports but through the social network's internal mechanisms, namely its technology and staff. Facebook says it used automated software to recognize extremist material, especially images. The average time to remove newly posted extremist content was less than a minute.

Finally, for the first time, the platform clarifies its definition of terrorism: "Any non-governmental organization that engages in premeditated acts of violence against people or property to intimidate civilians, governments, or international organizations in order to achieve a political, religious, or ideological goal." The definition, Facebook explains, is agnostic to ideology or political purpose; the focus is on the use of violence, and it does not cover governments.
EDITORIAL TEAM
