Large organizations typically have more domains, more machines, and a greater network presence than smaller ones. As a result, they generally have more Compromised Systems, User Behavior, and Diligence findings. Risk vector grades are normalized based on an organization's size to ensure ratings are fairly calculated for large companies and avoid favoring or penalizing companies because of their relative size.
- We use a proprietary algorithm to estimate employee counts, including the employee count estimates used in Ratings Trees, so that counts remain accurate for Bitsight Security Ratings.
- Each risk vector is normalized using a specific method dependent on the associated risk and assigned a letter grade.
- Each risk vector is assigned a weight as outlined in the risk categories and risk vectors overview.
These methods ensure the security rating of a large company is comparable to that of a small company and vice versa.
Normalization by Risk Type
Different methods are used to normalize the final result depending on the risk type. The selection of these methods is determined by the associated risk we are evaluating. For example, user behavior-related risk vectors take into consideration the count of employees, while risk vectors that evaluate the configuration of systems take into consideration the total number of findings we are able to generate.
| Risk Category | Risk Vector | Method Used for Normalization |
|---|---|---|
| Compromised Systems | Botnet Infections | Employee Count |
| | Spam Propagation | Employee Count |
| | Malware Servers | Employee Count |
| | Unsolicited Communications | Employee Count |
| | Potentially Exploited | Employee Count |
| Diligence | SPF Domains | Findings Count |
| | DKIM Records | Findings Count |
| | TLS/SSL Certificates | Findings Count |
| | TLS/SSL Configurations | Findings Count |
| | Open Ports | Findings Count |
| | Web Application Headers | Findings Count |
| | Patching Cadence | Findings Count |
| | Insecure Systems | Employee Count |
| | Server Software | Active IP Count |
| | Desktop Software | Estimated User Count |
| | Mobile Software | Estimated User Count |
| | DNSSEC | Findings Count |
| | Mobile Application Security | Findings Count |
| | Web Application Security | Findings Count |
| | Domain Squatting | Not applicable |
| User Behavior | File Sharing | Employee Count |
| | Exposed Credentials | Not applicable |
| Public Disclosures | Security Incidents | Record Count & Company Size |
| | Other Disclosures | Not applicable |
Diligence Risk Vectors
Most Diligence risk vector findings are graded GOOD, FAIR, NEUTRAL, WARN, or BAD, with the exception of Patching Cadence and Domain Squatting. Insecure Systems findings are only graded NEUTRAL, WARN, or BAD; because of the nature of this risk vector, findings are never GOOD or FAIR. With those exceptions in mind, Diligence grades can be considered the ratio of FAIR, WARN, and BAD records to the total number of records associated with an organization. A larger organization will usually have more findings, and any given finding will have less impact than it would for a smaller organization.
Findings in a given grade may have different scoring impacts due to their estimated severity. To build a risk vector grade, we add the scoring impacts of all findings and divide them by the normalization factor to produce a raw score. To determine the risk vector’s letter grade (A-F), we convert the raw score to a percentile by ranking all the organizations we rate across all industries and locations.
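The grading process described above can be sketched as a small Python function. This is an illustrative outline only: the function name, the grade cutoffs, and the treatment of raw scores as penalties (lower is better) are assumptions, not Bitsight's actual implementation.

```python
def risk_vector_grade(finding_impacts, normalization_factor, all_raw_scores):
    """Illustrative sketch of risk vector grading (not Bitsight's code).

    finding_impacts:      scoring impact of each finding (penalties)
    normalization_factor: e.g. employee count or total findings count
    all_raw_scores:       raw scores of all other rated organizations
    """
    # Sum the scoring impacts of all findings and divide by the
    # normalization factor to produce a raw score.
    raw_score = sum(finding_impacts) / normalization_factor

    # Convert the raw score to a percentile by ranking it against all
    # rated organizations; here a lower raw score (less penalty per
    # unit of size) ranks higher.
    percentile = sum(1 for s in all_raw_scores if s > raw_score) / len(all_raw_scores)

    # Map the percentile to a letter grade; these cutoffs are assumed.
    for cutoff, letter in [(0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D")]:
        if percentile >= cutoff:
            return raw_score, percentile, letter
    return raw_score, percentile, "F"
```

For example, an organization with finding impacts summing to 5 and a normalization factor of 10 gets a raw score of 0.5; its letter grade then depends entirely on where 0.5 falls among the raw scores of all other rated organizations.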
Some organizations are excluded from the ranking process. These include cloud service providers and telecommunications companies, whose ratings are typically low because of customer-hosted assets that are not controlled by the organizations that own the IP address space.
Special cases are noted below.
Desktop Software and Mobile Software
Since these risk vectors track the operating systems and browser versions of outbound web traffic, normalization is based on an estimate of the number of users we can observe within a company's infrastructure. This estimate accounts for the traffic generated from each IP address, grouped by user agent, target domain, and a session identifier, which together allow us to approximate the number of distinct users within that infrastructure.
Server Software
Normalization is based on the number of unique IP addresses with exposed services, such as HTTP[S], SMTP, or SSH. This is derived from the data available on the Open Ports risk vector.
Compromised Systems Risk Vectors, File Sharing, and Insecure Systems
The Compromised Systems risk category tracks malware infections on internal endpoints by intercepting traffic to the malware's command and control (C2) infrastructure; the File Sharing risk vector tracks BitTorrent activity from a company; the Insecure Systems risk vector assesses endpoints that are communicating with an unintended destination. All three inform Bitsight about the abuse of endpoints and are therefore normalized by company size (employee count).
If the employee count for an organization is unknown, the employee count defaults to 100.
Security Incidents
The size of an organization (measured by the number of employees) factors into the impact calculation on a logarithmic basis. Employee count is restricted to 100 employees at the lower end and 100,000 at the upper end to account for the sparsity of data.
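The clamping described above can be sketched as follows. The bounds (100 and 100,000 employees) come from the text; the use of a base-10 logarithm and the function name are illustrative assumptions, since the exact formula is not published here.

```python
import math

def size_factor(employee_count, low=100, high=100_000):
    """Illustrative sketch: company size enters the Security Incidents
    impact calculation on a logarithmic basis, with the employee count
    clamped to [100, 100000] to account for data sparsity.
    The log base and scaling are assumptions, not Bitsight's formula."""
    clamped = min(max(employee_count, low), high)
    return math.log10(clamped)
```

With this sketch, a 50-person company and a 100-person company contribute the same size factor, as do companies with 100,000 and 1,000,000 employees.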
Learn more about the impact of Security Incidents, including details on the impact of breaches and general security incidents, as well as adjustments based on record counts, company size, and recording delays.
Adjusted Peer Analytics Data Counts
For the Risk Vector Details data in Peer Analytics, the displayed finding counts are adjusted to match the size of your organization. This adjustment results in more meaningful comparisons and ensures the displayed reference values are useful for guidance in defining your security performance goals.
- Compromised Systems: We adjust for company size (employee count).
- Diligence: We adjust for either the IP count for the Server Software risk vector or record count for all other Diligence risk vectors.
- File Sharing: We adjust for company size (employee count).
Example:
If your company has 10 findings in total with 2 BAD findings, a peer with 100 findings in total with 20 BAD findings is similar. The peer's BAD finding count is adjusted to “2,” i.e. there are 2 BAD findings per 10 total findings.
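The proportional adjustment in the example above can be sketched as a one-line scaling. The function name and rounding behavior are illustrative assumptions, not Bitsight's implementation.

```python
def adjusted_peer_count(peer_findings, peer_total, your_total):
    """Illustrative sketch: scale a peer's finding count to your
    organization's total finding count, so counts are comparable
    across organizations of different sizes."""
    return round(peer_findings * your_total / peer_total)
```

Applying it to the example: a peer with 20 BAD findings out of 100 total, compared against your 10 total findings, displays an adjusted BAD count of 2.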
- October 23, 2023: Updated to reflect normalization process for all risk types.
- October 22, 2021: Updated default if employee count is unknown, “1000” changed to “100.”
- November 16, 2020: Added Security Incidents.