Ratings Algorithm Updates (RAU) are a common practice across rating industries; they allow us to make necessary changes to the ratings algorithm as the cybersecurity landscape evolves. New technologies, such as cloud assets, are constantly introduced, and best practices often need to be updated in response to market changes, like working from home.
The changes for RAU 2021, which took effect on October 20, 2021, are listed below. They focus on improving the accuracy, explainability, and actionability of ratings.
Changes
- Risk Vector Weight Changes
- Improved Ratings Change Explanations
- Employee Counts
- Changes in Compromised Systems
- Changes in TLS/SSL Configurations
- Changes in Open Ports
- Changes in Web Application Headers
- Changes in Desktop Software & Mobile Software
- Changes in Public Disclosures
Risk Vector Weight Changes
To adapt the rating algorithm to the changing conditions in which it operates, risk vector weights are adjusted to better reflect the influence of each security domain in the overall posture of an organization.
💡 Our recommendation: To reduce the chance of a rating drop, check if there are improvements that can be made for the TLS/SSL Certificates and TLS/SSL Configurations risk vectors.
The following table shows the difference in risk vector weights before and after RAU 2021:
Risk Vector | Before RAU 2021 | After RAU 2021
---|---|---
Compromised Systems | 55% | 50%
Diligence | 35% | 40%
SPF Domains | 1% | 1%
DKIM Records | 1% | 1%
TLS/SSL Certificates | 5% | 7.5%
TLS/SSL Configurations | 5% | 10%
Open Ports | 13% | 10%
Web Application Headers | 3% | 3%
Patching Cadence | 2% | 2%
Insecure Systems | 1% | 1.5%
Server Software | 2% | 2%
Desktop Software | 1.5% | 1.5%
Mobile Software | 0.5% | 0.5%
User Behavior | 10% | 10%
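To illustrate how risk vector weights shape the overall posture, here is a minimal sketch that combines per-vector scores into a weighted average using the post-RAU 2021 weights. It assumes each risk vector grade is expressed as a number from 0 to 100 and that the vectors combine as a simple weighted sum; the actual Bitsight rating algorithm is considerably more involved.

```python
# Minimal sketch (not the actual Bitsight formula): combining per-risk-vector
# scores (0-100) with the post-RAU 2021 weights into a weighted average.
# The Diligence sub-vector weights below sum to 0.40, so the totals add to 1.0.
WEIGHTS_AFTER_RAU_2021 = {
    "compromised_systems": 0.50,
    "spf_domains": 0.01,
    "dkim_records": 0.01,
    "tls_ssl_certificates": 0.075,
    "tls_ssl_configurations": 0.10,
    "open_ports": 0.10,
    "web_application_headers": 0.03,
    "patching_cadence": 0.02,
    "insecure_systems": 0.015,
    "server_software": 0.02,
    "desktop_software": 0.015,
    "mobile_software": 0.005,
    "user_behavior": 0.10,
}

def weighted_posture_score(vector_scores: dict[str, float]) -> float:
    """Weighted average of per-vector scores (weights sum to 1.0)."""
    return sum(WEIGHTS_AFTER_RAU_2021[v] * s for v, s in vector_scores.items())
```

Note that under these weights, an improvement in TLS/SSL Configurations moves the result twice as much as it did before RAU 2021 (10% weight instead of 5%).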
Improved Ratings Change Explanations
There were many instances in which 10-point changes in the security rating could not be explained by a corresponding change to any risk vector.
We made our rounding method more robust so that any change in the rating can be traced back to at least one risk vector change, making the security rating more explainable.
Employee Counts
Employee counts are used to normalize the data of specific risk vectors (Compromised Systems, File Sharing, Insecure Systems, Security Incidents) to avoid favoring or penalizing companies because of their relative size.
We’ve improved the algorithm for estimating employee counts, including employee count estimation for Rating Trees. While this change doesn’t have a significant impact on overall ratings, it improves the accuracy of Bitsight Security Ratings.
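To show why normalization matters, the sketch below divides a raw event count by the estimated employee count so that two companies with the same infection density produce comparable values. The per-1,000-employee scale and the formula are illustrative assumptions, not Bitsight's actual normalization.

```python
# Minimal sketch: normalizing a raw event count by estimated employee count
# so that company size does not dominate the comparison.
# The per-1,000-employees scale is an illustrative assumption.
def normalized_event_rate(event_count: int, estimated_employees: int) -> float:
    """Events per 1,000 estimated employees."""
    if estimated_employees <= 0:
        raise ValueError("estimated_employees must be positive")
    return event_count / estimated_employees * 1000

# 50 infections at a 500-person company (100.0 per 1,000 employees) is a
# much stronger signal than 50 infections at a 50,000-person company (1.0).
```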
Changes in Compromised Systems
See How is the Compromised Systems Risk Category Calculated?
Reduced Multi-Day Impact of Infections
To reduce disproportionate rating impacts across malware families, since not all families communicate with the same frequency (e.g., Conficker vs. Gamarue), the impact of multi-day infections has been reduced. If an infection is persistent, only one event is counted for every three days it is observed. The First Seen and Last Seen dates are not affected.
This applies to all Compromised Systems risk vectors.
💡 Our recommendation: Even though the impact of multi-day events is reduced, it is still important to identify and remediate any detected malware infections.
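As a rough illustration of the multi-day rule described above, the sketch below counts one event for every three days a persistent infection is observed. The exact windowing used in the algorithm is not specified here, so treat this as an approximation.

```python
# Minimal sketch: a persistent infection observed for N consecutive days
# contributes roughly one counted event per 3-day window, approximating
# the reduced multi-day impact described above.
import math

def counted_events(observed_days: int) -> int:
    """Number of events counted for a single persistent infection."""
    return math.ceil(observed_days / 3)

# An infection seen on 9 consecutive days counts as 3 events instead of 9;
# the First Seen and Last Seen dates are unchanged.
```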
Changes in TLS/SSL Configurations
See How is the TLS/SSL Configurations Risk Vector Assessed?
Addressing Untrusted Certificates
Apple, Google, and Mozilla no longer trust certificates that were issued on or after September 1, 2020 and have a validity duration greater than 398 days.
Certificates issued on or after September 1, 2020 that have a validity period of more than 398 days are graded as WARN.
💡 Our recommendation: Check the validity period of certificates and make sure they have lifetimes of 398 days or less.
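The sketch below shows one way to check a host's leaf certificate against the 398-day rule using Python's standard ssl module. It is an illustrative check, not Bitsight's scanner, and it measures the validity period as the span between the certificate's notBefore and notAfter dates.

```python
# Minimal sketch: flag certificates issued on or after 2020-09-01 whose
# validity period exceeds 398 days (the condition graded WARN above).
import socket
import ssl
from datetime import datetime, timezone

ISSUED_CUTOFF = datetime(2020, 9, 1, tzinfo=timezone.utc)
MAX_VALIDITY_DAYS = 398

def certificate_validity_warning(host: str, port: int = 443) -> bool:
    """Return True if the host's certificate would be graded WARN."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed dict for the validated cert
    not_before = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc)
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    validity_days = (not_after - not_before).days
    return not_before >= ISSUED_CUTOFF and validity_days > MAX_VALIDITY_DAYS

if __name__ == "__main__":
    print(certificate_validity_warning("example.com"))
```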
Properly Handling Multiple Obsolete Protocols
We previously tested for four obsolete protocols (SSLv2, SSLv3, TLS 1.0, and TLS 1.1), all of which are nominally graded BAD. However, the previous implementation limited the penalty, so a server offering all four obsolete protocols was only penalized for a subset of them.
The more appropriate way to grade obsolete protocols is to penalize each protocol individually: a server with four obsolete protocols now has a larger impact on the rating than a server with only three.
💡 Our recommendation: Remove all obsolete protocols (SSLv2, SSLv3, TLS 1.0, TLS 1.1). Even if business constraints require the use of TLS 1.0 or TLS 1.1, removing SSLv2 and SSLv3 is highly recommended.
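A minimal sketch of the per-protocol approach is shown below: each obsolete protocol a server still offers adds to the penalty, so four obsolete protocols weigh more than three. The per-protocol penalty value is an illustrative assumption, not an actual grading constant.

```python
# Minimal sketch: penalizing each obsolete protocol a server supports.
# The per-protocol penalty value (1.0) is an illustrative assumption.
OBSOLETE_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1.0", "TLSv1.1"}

def obsolete_protocol_penalty(supported_protocols: set[str]) -> float:
    """Each obsolete protocol contributes to the penalty (no cap)."""
    return 1.0 * len(supported_protocols & OBSOLETE_PROTOCOLS)

# A server offering all four obsolete protocols (penalty 4.0) is graded
# worse than one offering only three (penalty 3.0).
```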
Improvement Method to Determining and Using the Weight of TLS/SSL Configurations Issues
We improved the algorithm used to determine the weight of each type of issue inside of a finding and to calculate the final finding grade. This leads to a more accurate finding grade calculation.
Avoiding the Penalty of Having Wildcard DNS Hostnames
There were instances where the presence of wildcards in DNS records led to an unnecessary magnification of the number of TLS/SSL Configurations findings. We have improved the way wildcards in DNS records are handled to prevent the creation and evaluation of repeated findings.
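One way to avoid this magnification, sketched below, is to collapse hostnames that are covered by the same wildcard DNS record into a single finding key before evaluation. This is an illustrative approach and data model, not a description of Bitsight's implementation.

```python
# Minimal sketch: collapsing hostnames covered by the same wildcard DNS
# record into one finding key so repeated findings are not created.
# The wildcard-zone data model is an illustrative assumption.
def finding_key(hostname: str, wildcard_zones: set[str]) -> str:
    """Map a hostname to its wildcard zone key if one covers it."""
    parent = hostname.split(".", 1)[-1]
    return f"*.{parent}" if parent in wildcard_zones else hostname

wildcard_zones = {"example.com"}
hosts = ["a.example.com", "b.example.com", "www.other.com"]
unique_findings = {finding_key(h, wildcard_zones) for h in hosts}
# {'*.example.com', 'www.other.com'} -> 2 findings instead of 3
```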
Changes in Open Ports
See How is the Open Ports Risk Vector Assessed?
All Findings Related to RDP are Graded as BAD
Remote Desktop Protocol (RDP) is a known attack vector for ransomware and should not be visible outside of the firewall. To improve grading consistency by port type and across company size, all RDP findings are changed from WARN to BAD and have a larger rating impact.
💡 Our recommendation: Deactivate any instances of RDP exposed outside of the firewall, regardless of rating impact.
Rating Drops Due to a Single Open Port Finding are Limited
There were instances where a single Open Ports finding could drop the rating by up to 130 points, particularly for small organizations. We have reduced the frequency of such large drops due to single findings and limited the drop to a maximum of 80 points.
Changes in Web Application Headers
See How is the Web Application Headers Risk Vector Assessed?
NEUTRAL Findings Squash Non-NEUTRAL Findings
We changed the evaluation of Web Application Headers findings in cases where grade increases were not experienced immediately after issues were fixed. Previously, when a WARN or BAD finding was fixed, the fix resulted in a NEUTRAL grade that was treated as a new finding, while the earlier WARN or BAD finding remained until it completed its lifetime (60 days).
In the new implementation, the new NEUTRAL finding resulting from remediation actions squashes the previous BAD or WARN finding.
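As a sketch of this squashing behavior (with an illustrative finding structure, not Bitsight's internal data model): when a new NEUTRAL finding arrives for the same asset, any earlier WARN or BAD finding for that asset is retired immediately instead of lingering for its 60-day lifetime.

```python
# Minimal sketch: a new NEUTRAL finding "squashes" (retires) older WARN/BAD
# findings for the same asset, so remediation is reflected immediately.
# The Finding structure is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    grade: str          # "GOOD", "NEUTRAL", "WARN", or "BAD"
    active: bool = True

def record_finding(findings: list[Finding], new: Finding) -> None:
    if new.grade == "NEUTRAL":
        for f in findings:
            if f.asset == new.asset and f.grade in ("WARN", "BAD"):
                f.active = False  # squash instead of waiting 60 days
    findings.append(new)

findings = [Finding("www.example.com", "BAD")]
record_finding(findings, Finding("www.example.com", "NEUTRAL"))
# The BAD finding is now inactive; only the NEUTRAL finding counts.
```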
Wildcard DNS Hostname Penalties are Avoided
There were instances where the presence of wildcards in DNS records led to an unnecessary magnification of the number of Web Application Headers findings. We have improved the handling of wildcards in DNS records to prevent the creation and evaluation of repeated findings.
Perfect Headers are Required for GOOD Findings
We have changed our approach to evaluating Web Application Headers: only findings with perfect headers are graded as GOOD. Going forward, findings with imperfect headers that would previously have been graded as GOOD are graded as FAIR.
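A minimal sketch of the stricter grading is shown below: a finding is GOOD only when every evaluated header is present, and anything imperfect falls to FAIR. The list of headers and the presence-only check are illustrative assumptions; the actual evaluation also inspects header values and uses additional grades.

```python
# Minimal sketch: grade GOOD only when every evaluated header is perfect.
# The header list and presence-only check are illustrative assumptions.
EVALUATED_HEADERS = (
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
)

def grade_headers(response_headers: dict[str, str]) -> str:
    present = {h.lower() for h in response_headers}
    if all(h in present for h in EVALUATED_HEADERS):
        return "GOOD"   # perfect headers (in this sketch: all present)
    return "FAIR"       # anything imperfect is no longer GOOD

print(grade_headers({"Strict-Transport-Security": "max-age=31536000"}))  # FAIR
```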
Changes in Desktop Software & Mobile Software
See how the Desktop Software and Mobile Software risk vectors are assessed.
Dynamic N/A Grading
To ensure fairness when grading cases of low finding visibility, the Desktop Software and Mobile Software risk vectors are graded as N/A when finding visibility is low, and they have no impact on the rating.
The Desktop Software and Mobile Software risk vectors are graded as N/A when at least one of the following conditions is observed:
- Total Estimated Users is less than 5
- Estimated Users is less than 100 and Estimated Users is less than Employee Count/1000
There might be situations where the risk vector is graded as N/A when it was previously graded between A and F.
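The two conditions above can be expressed as a short check. The sketch below simply mirrors the listed conditions and is for illustration only.

```python
# Minimal sketch mirroring the two N/A conditions listed above for the
# Desktop Software and Mobile Software risk vectors.
def grade_is_na(total_estimated_users: int, estimated_users: int,
                employee_count: int) -> bool:
    if total_estimated_users < 5:
        return True
    if estimated_users < 100 and estimated_users < employee_count / 1000:
        return True
    return False

# A 200,000-employee company with only 50 estimated users observed:
print(grade_is_na(total_estimated_users=50, estimated_users=50,
                  employee_count=200_000))  # True -> graded N/A, no rating impact
```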
Changes in Public Disclosures
See How is the Public Disclosures Risk Category Calculated?
Improvements in Assessing Breach Impact
To improve the grading of Breach Security Incidents and General Security Incidents and make it more consistent with other risk vectors, we’ve implemented a data-driven approach to assessing the impact of a security incident based on the incident type and the predictability of future security incidents.