To ensure we provide the most relevant and comprehensive ratings of cyber security performance, we are committed to continuously expanding the quality, breadth, and innovation of the data used in security ratings. We will continue to add data sources and risk vectors to expand visibility into a company’s performance.
We do this by owning proprietary data streams and working closely with partners around the globe to ensure access to multiple, diverse data feeds. We rigorously analyze the quality, origin, and confidence of all collected data. Because of this breadth, we can cross-correlate findings and improve confidence based on multiple observation points and methods.
In addition, we can provide historical data going back 1 year, giving organizations a long-term view of security performance across the enterprise.
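The idea of improving confidence through multiple observation points can be sketched as follows. This is a hypothetical illustration only: the function name, the probability model (treating each observation as an independent detection probability), and the sample values are assumptions for the example, not part of the rating product.

```python
# Hypothetical sketch: combining independent observation points into one
# confidence score. Treating each observation as an independent detection
# probability, the combined score is the probability that at least one
# observation is a true positive.

def combined_confidence(observations: list[float]) -> float:
    """Return 1 minus the probability that every observation is a false positive."""
    miss = 1.0
    for p in observations:
        miss *= 1.0 - p  # chance this particular observation is wrong
    return 1.0 - miss

# Two medium-confidence sightings from different methods reinforce each other:
print(round(combined_confidence([0.6, 0.7]), 2))  # 0.88
```

Under this model, two independent 60–70% observations yield a materially higher combined confidence than either alone, which is why diverse collection methods matter.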
Learn more about the types of collected data.
Data Sources
Data sources undergo a rigorous business and technical evaluation process.
We begin by evaluating dozens of candidates, narrowing the list to one or two over the course of 3 to 6 months. The data is tested to ensure it provides wide coverage, is delivered reliably, and includes enough detail to prove its own validity.
If the data source is being introduced as a new risk vector, we release it as a beta for up to one year before including it in a ratings algorithm update (RAU). This “soak time” in customers’ hands enables us to identify and resolve the vast majority of corner cases that could cause inaccurate rating changes.
Once new sources are released into the ratings algorithm, our support, research, data science, and engineering teams all maintain their involvement in resolving the remaining corner cases that we discover via internal monitoring and customer feedback.
Methods
Our data sources are carefully curated for detective breadth and technical reliability, given the daily volume of processed data. Once a source’s legitimacy is qualified, our inclusion criteria are straightforward and the event is included in your organization’s rating.
Method | Description |
---|---|
Sinkhole | A technique that intercepts botnet traffic on its way to a command-and-control server (C&C or C2 server). By intercepting the traffic, the sinkhole can gather infection information and details on its origin. |
Spam Trap | A threat intelligence group plants email addresses, which no one should ever use or know about, to trap spammers. These spam traps are placed where spammers look for address lists, so any messages the planted addresses receive are malicious. These events are attributed to your company’s rating based on the sender IP/domain information contained in the message. |
Email Header Analysis | A message or set of messages that hit a spam trap is analyzed, and the header is determined to share properties with similar messages that indicate malicious intent. Some spambots are detected and characterized via email header analysis alone. While this is not evidence of outreach to a C&C server, the messages fit specific patterns that allow us to identify the bots by name (e.g., Lethic). |
Mail Server Connection Analysis | Spam type: Impossible HELO. An Impossible HELO is a spambot characteristic detected while a session is being established with an external mail server. After a Transmission Control Protocol (TCP) handshake completes, a legitimate email client announces itself as coming from a valid domain and proceeds to send email from that domain. With an Impossible HELO, the handshake completes, but the client announces itself as coming from an external mail provider’s domain that is known to be “impossible.” These events are detected by a network of sensors deployed across the Internet through thousands of Mail Transfer Agents (MTA). The sensors are programmed with the known HELO strings of major mail providers and flag messages sent from infected clients whose HELO strings are known to be impossible. This is a simple check that correctly configured mail servers virtually never fail. |
Internet Scanning | This technique uses a headless browser for hostname-based scanning, gathering information about web servers and best practices, as well as information about the web applications served from them. This includes low-level information on network interactions, security issues reported by the browser, files loaded, etc. |
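The Impossible HELO check described in the table above can be sketched in a few lines. This is an illustrative assumption, not the sensor network’s actual implementation: the provider list, function name, and the `sender_ip_is_provider` flag are all invented for the example.

```python
# Illustrative sketch of the Impossible HELO check. The HELO strings of
# major mail providers below are assumptions for this example; a real
# sensor would carry a curated list.

KNOWN_PROVIDER_HELOS = {"gmail.com", "outlook.com", "yahoo.com"}

def is_impossible_helo(helo: str, sender_ip_is_provider: bool) -> bool:
    """Flag a session whose client announced a major provider's HELO
    string without actually being one of that provider's mail servers."""
    return helo.lower() in KNOWN_PROVIDER_HELOS and not sender_ip_is_provider

# An infected residential client claiming to be gmail.com is flagged:
print(is_impossible_helo("gmail.com", sender_ip_is_provider=False))  # True
```

A correctly configured mail server announces its own domain, so it never matches a provider string it does not own, which is why this check is virtually impossible to fail legitimately.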
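One kind of best-practice finding the internet-scanning method can surface is a web server that omits widely recommended security response headers. The sketch below is a simplified illustration, not the product’s scanner: the header names are real HTTP standards, but the function and the choice of which headers to check are assumptions for this example.

```python
# Simplified sketch of one internet-scanning style check: inspecting a
# web server's response headers for common security best practices.

RECOMMENDED_HEADERS = (
    "strict-transport-security",  # enforce HTTPS (HSTS)
    "content-security-policy",    # restrict script/content sources
    "x-content-type-options",     # disable MIME sniffing
)

def missing_security_headers(response_headers: dict[str, str]) -> list[str]:
    """Return the recommended headers absent from a server response."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED_HEADERS if h not in present]

# A server that only sets HSTS is missing the other two:
print(missing_security_headers({"Strict-Transport-Security": "max-age=63072000"}))
# ['content-security-policy', 'x-content-type-options']
```

A real scanner collects far more (network interactions, browser-reported issues, loaded files), but the header comparison above captures the basic shape of a best-practice check.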
- February 28, 2024: Added internet scanning collection method.
- July 6, 2023: About data sources.
- January 16, 2020: Published.