The data available in the Safeguarding Full Report can feel overwhelming. This article provides extra guidance to help Safeguarding officers make informed decisions when investigating a Safeguarding breach.
Smoothwall's Safeguarding feature is designed to detect intentional user activity which is likely to indicate a cause for concern. To do this, Safeguarding has a number of "rulesets" which define how web content is classified.
Safeguarding looks for results that match (and do not match) a series of Guardian categories, then removes those entries that do not reflect user intent and would otherwise just pad out the Full Report. Each Guardian category is assigned a threat level of Danger, Warning, or Advisory, depending on the ruleset in question. The intent is to allow, say, the Bullying ruleset to give less weight to content categorized as Intolerance than the Radicalisation ruleset does, while still maintaining an interest in it.
| Ruleset | Guardian category | Excluded category |
|---|---|---|
| Radicalisation | Terrorism | Education and Reference |
| Abuse | Child abuse | Education and Reference |
| Suicide | Self harm | Education and Reference |
| Substance Abuse | Drugs | Education and Reference |
| Bullying | Child abuse | Education and Reference |
| Criminal Activity | Personal weapons | Education and Reference, Medical Information |
| Adult Content | Sexuality Sites | Education and Reference |
Note: Even if the above categories were allowed, blocked, or whitelisted in a Guardian policy, they would still cause a breach if an attempt was made to visit, or search for, a domain in one of those categories (excluding the Education and Reference and Medical Information categories, of course).
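The match-and-exclude logic described above can be sketched as follows. This is a hypothetical illustration, not Smoothwall's actual implementation: the ruleset table, the data structure, and the function name are all assumptions based on the categories listed in this article.

```python
# Hypothetical sketch: ruleset -> (categories indicating a breach,
# categories that exclude a breach). Contents are taken from the table
# above; the representation itself is an assumption.
RULESETS = {
    "Radicalisation": ({"Terrorism"}, {"Education and Reference"}),
    "Bullying": ({"Child abuse"}, {"Education and Reference"}),
    "Criminal Activity": ({"Personal weapons"},
                          {"Education and Reference", "Medical Information"}),
}

def is_breach(ruleset, page_categories):
    """Return True when a page's Guardian categories match one of the
    ruleset's breach categories and none of its exclusions apply."""
    breach, excluded = RULESETS[ruleset]
    cats = set(page_categories)
    return bool(cats & breach) and not (cats & excluded)
```

For example, a page categorized as both Terrorism and Education and Reference would not breach the Radicalisation ruleset, because the exclusion takes precedence.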
What Does Not Trigger a Safeguarding Breach?
Because of the vast amount of data that is returned from a single website request, there is the risk that a Safeguarding breach may be triggered where there is no user intention. The following internet "paraphernalia" will not trigger a Safeguarding breach:
- Images that are categorized as Web Search, Image Search, or Connect for Chromebooks
Only the actual search terms entered by the user are used to determine whether a breach has occurred. The returned URLs that the user can choose from are not checked unless the user clicks through; it is unlikely the user will click through to every website. For example, searching for "kittens" returns the URL for the Animal Liberation Front, which hosts images of kittens and could otherwise trigger a Safeguarding breach.
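One way to check only what the user typed, rather than the result URLs, is to pull the search terms out of the search engine's request URL. The sketch below assumes the query-string parameter names used by some common engines; it is an illustration, not Smoothwall's method.

```python
from urllib.parse import urlparse, parse_qs

# Assumed query-parameter names for common search engines.
SEARCH_PARAMS = ("q", "query", "p")

def extract_search_terms(url):
    """Return the search terms from a search-engine URL, or None if the
    request does not look like a search."""
    qs = parse_qs(urlparse(url).query)
    for name in SEARCH_PARAMS:
        if name in qs:
            return qs[name][0]
    return None
```

A request to `https://www.google.com/search?q=kittens` would yield the term "kittens", while an ordinary page request yields nothing to check.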
Many search engines try to preempt the search term as it is being entered. This fires off a search in the background, which can return results that do not reflect the user's intention. For example, entering "sextant" into a search engine could trigger a Safeguarding breach for a "sex" search, even though the user wanted information about nautical instruments.
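One plausible way to discount this autocomplete noise is to drop a search term when a later search, made within a short window, extends it, so only the final term is scored. The two-second window and the prefix rule below are assumptions for illustration, not Smoothwall's documented behaviour.

```python
def final_searches(events, window=2.0):
    """events: list of (timestamp, term) tuples in time order.
    Drop a term when a later term within `window` seconds extends it,
    keeping only the searches the user actually finished typing."""
    kept = []
    for i, (ts, term) in enumerate(events):
        superseded = any(
            later.startswith(term) and later != term and lts - ts <= window
            for lts, later in events[i + 1:]
        )
        if not superseded:
            kept.append(term)
    return kept
```

Under this heuristic, the background searches for "sex" and "sext" are discarded and only "sextant" is evaluated.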
JSON requests that are not related to a web search are not considered for a Safeguarding breach. JSON requests are typically made by web APIs and not directly by the user through a web browser.
Web requests made within one second of each other are considered page content or resources, and therefore do not trigger a breach.
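The JSON and one-second rules above can be sketched as a single filter. The request representation (a timestamp, URL, and content type) is an assumption made for illustration; it is not how Smoothwall models traffic internally.

```python
def is_user_intent(req, prev_req):
    """req and prev_req are (timestamp, url, content_type) tuples.
    Return True only when the request looks like deliberate browsing."""
    ts, url, content_type = req
    # JSON responses are typically API traffic, not direct user browsing.
    if content_type == "application/json":
        return False
    # Requests within one second of the previous one are treated as page
    # resources (images, scripts, stylesheets) rather than user intent.
    if prev_req is not None and ts - prev_req[0] < 1.0:
        return False
    return True
```

So an image fetched half a second after a page load, or a background API call, would not reach the breach check at all.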
Additionally, the following file types are discounted:
OK, So How Does All of that Prove a User's Intention?
Typing into a search engine proves a user's intent to search for those terms. A page returning a list of URLs as a search result does not show intent. However, clicking a URL, following a link, or browsing directly to a website demonstrates that the user intends to use that page in some way.
The following websites provide further information:
- Which categories to Decrypt and Inspect for best Safeguarding results
- Why are my Safeguarding Alerts and Notifications going to SPAM?
- Why am I getting blank URLs in my Safeguarding reports?
12th January 2017