Facebook Inc. has begun rating users on their trustworthiness as it attempts to combat misinformation on the social media site, the Washington Post reports.

Citing an interview with Facebook product manager Tessa Lyons, the Post said Facebook created the rating system over the past year.

Lyons' job includes identifying malicious actors on the site.

The scores rate users on a scale from zero to one. Signals used to establish the rating include how users interact with articles posted on the site. If a user flags an article as false and a Facebook fact-checker later confirms it, Facebook might weight that user's feedback more heavily than others', Lyons told the Post in an email.
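The Post story does not describe Facebook's formula, but the general idea of weighting reports by a reporter's track record can be sketched in a few lines of Python. The example below is purely illustrative: the score-update rule, the learning rate, and the weighted-count helper are assumptions made for the sketch, not Facebook's actual system.

```python
# Hypothetical sketch only: Facebook has not published its scoring method.
# All names and parameters here (trust_score, LEARNING_RATE, weighted_flag_count)
# are illustrative assumptions, not the company's implementation.

LEARNING_RATE = 0.1  # how quickly a user's score responds to new evidence


def update_trust_score(trust_score: float, flag_confirmed: bool) -> float:
    """Nudge a 0-to-1 trust score up when a user's false-news flag is
    confirmed by a fact-checker, and down when it is rejected."""
    target = 1.0 if flag_confirmed else 0.0
    new_score = trust_score + LEARNING_RATE * (target - trust_score)
    return min(1.0, max(0.0, new_score))  # keep the score within [0, 1]


def weighted_flag_count(flags: list[tuple[str, float]]) -> dict[str, float]:
    """Total the flags against each article, weighting each flag by the
    reporting user's trust score so reliable reporters count for more."""
    totals: dict[str, float] = {}
    for article_id, reporter_score in flags:
        totals[article_id] = totals.get(article_id, 0.0) + reporter_score
    return totals


# Example: one flag from a high-scoring reporter outweighs
# several flags from low-scoring accounts.
print(weighted_flag_count([("story-1", 0.9), ("story-2", 0.1), ("story-2", 0.1)]))
```

Under this kind of scheme, a single report from a user with a strong history of accurate flags can carry more weight than a coordinated burst of reports from unreliable accounts, which is the problem the Post says the rating is meant to address.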

Lyons said that if people reported only things that were actually false, rather than things they merely disagreed with, her job would be much easier.

Many social media sites, including Twitter, have been policing their systems more aggressively since it became apparent that Russian hackers and hate-spewing activists were violating their terms of service.

The Post article notes that “false reporting has become a tactic in far-right online harassment campaigns.”