How Are Behavioral Attributes Assessed in an Open Source Search?
An open source search involves reviewing publicly available information to determine whether any content may be relevant to the hiring process.
For this type of check, Manymore uses an analytical tool powered by machine learning to identify specific types of behaviors in text and images published on open social media platforms.
The goal is not to evaluate personality or opinions, but to detect potential public behavior that could relate to reputation, integrity, or professional suitability.
How the Analysis Works
The tool retrieves publicly available posts and images from the candidate’s open profiles.
Each post and image is analyzed for 13 distinct behavioral attributes and a customized set of keywords (up to 500 per profile).
If a post exceeds a certain confidence threshold for at least one behavioral attribute, it is automatically flagged for manual review.
For example, a post might have 65% confidence for Disparaging and 73% confidence for Prejudice. In this case, the flag would indicate Prejudice, the attribute with the highest confidence score, as the primary behavioral attribute.
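As a minimal sketch of this flagging logic, the snippet below flags a post when any attribute confidence meets a threshold and reports the highest-scoring attribute as the primary one. The 0.60 threshold, the function name, and the record layout are illustrative assumptions, not the tool's actual implementation:

```python
# Illustrative sketch only: the 0.60 threshold, function name, and record
# layout are assumptions for demonstration, not the actual implementation.

FLAG_THRESHOLD = 0.60  # assumed confidence threshold


def flag_post(attribute_scores: dict[str, float]) -> dict | None:
    """Flag a post if any behavioral attribute meets the threshold.

    attribute_scores maps attribute names to model confidence (0.0-1.0).
    Returns a flag record naming the highest-scoring attribute, or None.
    """
    exceeded = {a: s for a, s in attribute_scores.items() if s >= FLAG_THRESHOLD}
    if not exceeded:
        return None
    primary = max(exceeded, key=exceeded.get)
    return {
        "primary_attribute": primary,
        "scores": exceeded,
        "needs_manual_review": True,
    }


# The example from the text: 65% Disparaging, 73% Prejudice -> Prejudice is primary.
print(flag_post({"Disparaging": 0.65, "Prejudice": 0.73}))
```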
It is also possible for a post and its associated image to be flagged at the same time. In that case, the flag shows both reasons: the behavioral attribute triggered by the post text and the one triggered by the image.
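A hypothetical continuation of the sketch above shows how such a combined flag could carry both reasons; the scores and function name are again assumptions:

```python
def flag_post_with_image(text_scores: dict[str, float],
                         image_scores: dict[str, float]) -> dict | None:
    """Combine text and image results so a single flag carries both reasons."""
    text_flag = flag_post(text_scores)    # reuses flag_post from the sketch above
    image_flag = flag_post(image_scores)
    if text_flag is None and image_flag is None:
        return None
    return {
        "text_reason": text_flag["primary_attribute"] if text_flag else None,
        "image_reason": image_flag["primary_attribute"] if image_flag else None,
        "needs_manual_review": True,
    }


# A post flagged for Profanity whose attached image is flagged as a Weapons Image:
print(flag_post_with_image({"Profanity": 0.81}, {"Weapons Image": 0.77}))
```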
All findings are manually reviewed by a Manymore analyst before being included in the report.
Behavioral Attributes
Below is the list of behavioral attributes used in the assessment framework, based on Ferretly’s definitions:
| Behavior | Definition |
|---|---|
| Disparaging | Name calling, offensive, or derogatory statements toward an individual about their personal attributes such as weight, height, looks, or intelligence. |
| Drug/Alcohol Mention | Statements related to drugs and/or drug use, including slang, street names, and related phrases. |
| Drug Image | Images of pills, syringes, or paraphernalia; may include smoking, drinking, or injections. |
| Gory Image | Images showing blood, injury, disfigurement, corpses, crime scenes, or violence. |
| Nudity Image | Explicit or non-explicit nudity, adult or pornographic content, or partially exposed body parts. |
| Politics/Government | Statements relating to politics or government affairs, such as politicians, policies, or social issues (e.g., immigration, environmental policy). |
| Prejudice | Derogatory, abusive, or threatening statements toward a group of people based on race, religion, or sexual orientation. |
| Profanity | Obscene language, cursing, swearing, or generally crude or vulgar words and phrases. |
| Rude Gestures/Symbols | Visual depictions of rude gestures (e.g., middle finger), Nazi symbols, or extremist/terrorist flags. |
| Self-Harm | Mentions or indications of wanting to harm oneself or commit suicide, or references to suicidal behavior in others. |
| Suggestive | Expressions or references that could be perceived as sexually demeaning or harassing. |
| Threats | Expressions of intent to inflict harm on another person or to take their life. |
| Weapons Image | Images showing firearms, explosives, sharp weapons, or ammunition. |
| Keywords | Flags posts based on matches to customized keywords defined by the customer. |
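Unlike the model-scored attributes, the Keywords attribute flags a post when its text matches one of the customer-defined keywords (up to 500 per profile, as noted above). The sketch below shows one possible way such a match could work; the keyword list, the whole-word matching rule, and the function name are illustrative assumptions:

```python
import re


def match_keywords(post_text: str, keywords: list[str]) -> list[str]:
    """Return the customer-defined keywords found in a post's text.

    A simple case-insensitive, whole-word check; the tool's real matching
    rules (e.g. handling of slang or inflected forms) are not documented here.
    """
    hits = []
    for kw in keywords:
        if re.search(rf"\b{re.escape(kw)}\b", post_text, flags=re.IGNORECASE):
            hits.append(kw)
    return hits


# Hypothetical customer keyword list (a profile can have up to 500 keywords):
print(match_keywords("Great night out with the crew!", ["crew", "night shift"]))
# -> ['crew']
```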
Handling of Findings
All flagged posts and images are manually reviewed by a qualified Manymore analyst before reporting.
The manual review takes context, time, language, and tone (including humour or sarcasm) into account to avoid misinterpretation.
The report describes any findings neutrally and factually, without interpretation. For example:
“Post from a public profile dated 2022 indicates use of profanity in a public context.”
It is always the employer’s responsibility to decide whether the content is relevant to the role or recruitment process.
Legal and Ethical Considerations
- The analysis includes only publicly available information.
- The tool uses machine learning, but all decisions are made by humans.
- Assessments must never be based on political beliefs, religion, health, or private life.
- The candidate has the right to access and comment if findings are recorded.
- Categories such as Self-Harm and Politics/Government are reviewed with particular caution to avoid violations of privacy or discrimination laws.
Employers should also consider relevant local regulations when deciding which behavioral attributes to include in their reports.
Ensuring Accuracy and Fairness
Machine learning is a support tool, not evidence.
Therefore, human quality control is always performed before a finding is included in the report.
This ensures fairness, relevance, and proportionality in line with GDPR and employment law requirements for lawful and objective processing.
Updated on: 11/11/2025