Disclaimer
The information in this document is subject to change without notice and describes only the product defined in the introduction of this documentation.
This documentation is intended for the use of Bitsight customers and partners only for the purposes of the agreement under which the document is submitted, and no part of it may be used, reproduced, modified or transmitted in any form or means without the prior written permission of Bitsight.
The documentation has been prepared to be used by professional personnel, and the customer assumes full responsibility when using it.
The information or statements given in this documentation concerning the suitability, capacity, or performance of the mentioned hardware or software products are given “as is” and all liability arising in connection with such hardware or software products shall be defined conclusively and finally in a separate agreement between Bitsight and the customer. Bitsight has made all reasonable efforts to ensure that the instructions contained in the document are adequate and free of material errors and omissions.
Bitsight will explain, if necessary, issues that may not be covered by the document.
Bitsight welcomes customer comments as part of the process of continuous development and improvement of the documentation.
Bitsight will correct errors in this documentation as soon as possible.
IN NO EVENT WILL BITSIGHT BE LIABLE FOR ERRORS IN THIS DOCUMENTATION OR FOR ANY DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, DIRECT, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES, OR ANY LOSSES, SUCH AS BUT NOT LIMITED TO LOSS OF PROFIT, REVENUE, BUSINESS INTERRUPTION, BUSINESS OPPORTUNITY OR DATA, THAT MAY ARISE FROM THE USE OF THIS DOCUMENT OR THE INFORMATION IN IT.
This documentation and the product it describes are considered protected by copyrights and other intellectual property rights according to the applicable laws.
Bitsight brand and logo are trademarks of Bitsight Technologies.
Other product names mentioned in this document may be trademarks of their respective owners, and they are mentioned for identification purposes only.
Copyright © Bitsight Technologies 2019. All rights reserved.
Table of Contents
- 1 About this Guide
- 1.1 Audience
- 1.2 Conventions
- 1.3 Common Definitions
- 2 Overview
- 2.1 Infection Detection Real-time API
- 2.2 Events
- 2.3 Consumers
- 3 Events
- 3.1 Definition
- 3.2 Fields
- 4 Consumers
- 4.1 HTTP Example
- 4.2 Authentication
- 4.3 Limiting Data
- 4.4 WebSockets / EventSource
- 4.5 Field Filter
- 4.6 Data Filter
- 4.6.1 String Matchers
- 4.6.2 Asterisk Matchers
- 4.6.3 Number Matchers
- 4.6.4 Interval Matchers
- 4.6.5 CIDR Matchers
- 4.6.6 RegExp Matchers
- 4.6.7 Negative Matchers
- 4.7 Modules
- 4.7.1 Module Add
- 4.7.2 Module Group
- 4.7.3 Module Follow
- 4.7.4 Module KPI
- 4.7.5 Module Unique
- 4.7.6 Module URI
- 4.7.7 Module Geo
- 4.7.8 Module IPRep
- 4.7.9 Module CC
- 4.8 Multiple Filters
- 4.9 Query Updates
- 5 Credentials
- 6 Asynchronous API calls
- 6.1 API Ping
- 6.2 Modules API
- 6.2.1 API Geo
- 6.2.2 API IPRep
- 6.2.3 API CC
- 6.2.4 API URI
- 7 References and tables
- 7.1 References
- 7.2 Document Revision
1 About this Guide
This document presents a general overview of the Streaming Platform and how to use it to consume events. It includes detailed information and examples for the supported protocols and formats, and notes on optional enhancement features.
1.1 Audience
This document is intended for consumers of events from the API.
The audience should have basic knowledge of the HTTP protocol and the JSON format.
1.2 Conventions
The following conventions will be used:
- Bold: Graphical User Interface elements such as options, menus, or buttons;
- Italic: references to topics in this document;
- Courier: paths, URLs, and text to be typed by the User;
- <TERM>: generic terms.
1.3 Common Definitions
The following definitions will be used:
Definition | Description |
---|---|
Boolean | The values are “true” or “false.” |
Consumer | A component that consumes events. |
Event | An individual data message, represented as a map of key and values. |
Event Field | The concatenation of Map Keys specifying a deep reference into an event value. |
Event Value | Any “null”, “boolean”, “number”, “string”, “list,“ or “map.” |
Field | A sequence of one or more concatenated Map keys representing a path deep into the structure. |
Float Number | A number with a decimal point, e.g. 1.1, -42.24. |
Hessian | Standard binary message format as defined by caucho.com. Reference: http://hessian.caucho.com/doc/hessian-serialization.html |
Integer Number | A number without a decimal point, e.g. “1”, “-42”, up to unsigned long (64 bits). |
JSON | Standard textual message format as defined by http://json.org. |
List | A list of values. |
Long-Poll Mode | Connection mode that transfers up to one message, staying connected until a message is available or until a requested time limit (optionally forever) elapses, after which the Stream Platform terminates the connection. |
Map | A list of key/value pairs. |
Map Key | The string key, always ASCII and lower case. |
Number | Either a whole number up to unsigned 64 bits (long, integer, short), or a decimal number in IEEE format (double, float). |
Poll Mode | Connection mode that transfers up to one message, when the Stream Platform then terminates the connection. |
Producer | A component that produces events. |
Stream | Action of transferring events from Producers to Consumers, in real time, without storage of the messages. |
Stream Platform | End-to-end service for Producers generating events to be streamed to the Consumers. |
String | A UTF-8 string. |
Value | Any Null, Boolean, Integer, Float, String, List, or Map. |
Table 1-1 Most common definitions
2 Overview
2.1 Infection Detection Real-time API
The API is a platform that performs Realtime Streaming and Complex Event Processing, receiving messages from producer components, and generating intelligence and actionable information to consumer components.
The platform aims at a concept of realtime^3, providing realtime in three complementary ways:
- Messages flow from one end to the other with low-latency - consumers see messages that are happening “right now”
- Processing is dynamic, as consumers can construct a query with any filtering and processing combination needed, against whatever data is available.
- Consumers can dynamically and continuously change the query to be performed, without the need to reconnect.
“The API is like seeing a football game where the viewer can, at any time, pick any camera, zoom and pan each camera as it likes, and ask at any time the platform to measure something about the game – the amount of time each team has the ball, the distance ran by each player, etc.”
2.2 Events
Events are generic messages as maps of keys and values, where a value may be a sub-tree of other maps or lists.
Each event is independent of the previous one, and there is neither guarantee of delivery nor guarantee of ordering.
2.3 Consumers
Most consumers connect to the API via standard HTTP, maintaining a long-lived connection that will receive multiple messages in sequence.
There are three HTTP standard mechanisms available:
- Regular HTTP in “HTTP chunked” mode, as specified in RFC2616. This mode can be easily used with command line tools like “curl”. By default, each event is separated by a LF (optionally CR+LF), as JSON ensures those characters are escaped and hence each event is guaranteed to be a single line.
- HTTP WebSockets, where each message is pushed individually according to the standard.
- HTTP EventSource, similar to WebSockets, but more proxy-friendly.
For applications, including command line scripts using “curl” or equivalent, regular HTTP is simpler, whilst for direct browser access it’s recommended to use EventSource whenever available, as EventSource is more HTTP-Proxy friendly than WebSockets.
The connection will be kept open until either endpoint decides to close it. The Stream API side will close it after a designated time without any messages, or, if the Consumer requests it, after a certain number of bytes has been sent or a certain amount of time has elapsed.
When a Consumer connects, it specifies a query containing a series of parameters to specify the filters, augment modules or processing modules.
A single connection may specify more than one query, allowing receiving multiple event types without the need for multiple connections.
After a connection is made, it is possible to perform a parallel query to re-specify the query of the original connection, allowing receiving different event types without the need to disconnect and reconnect.
The simplest example to connect to the Stream API is just to run, on the command line, something like the following, replacing the server and port with the correct endpoint, and the key with the authentication key:
- $ curl -gsN 'http://<server>:<port>/stream?key=<key>&maxbytes=1'
This will connect to the given endpoint and retrieve a single message. Without the "maxbytes=1" it would receive an endless stream of messages that could fill up or even crash the terminal unless CTRL-C is quickly pressed.
The curl parameter "g" allows curl to pass the characters "[]{}" as-is when needed, the "s" keeps curl silent (no "download" statistics), and the "N" avoids buffering, so each message appears on screen as soon as it is received.
3 Events
3.1 Definition
All messages, independently of the protocol and programming language of the Consumer, can be interpreted as:
- Maps (key/values);
- Lists;
- Strings (always in UTF-8 bytes format);
- Numbers (long/integers/shorts – up to 64 bits - or doubles/floats for decimal numbers);
- Booleans;
- “null”.
For the sake of simplicity, the examples will be shown as JSON with "\r\n" (CR+LF) separators, albeit other equivalent protocols and event separators exist.
Each event is a whole map, with keys and values.
The streaming events are then a sequence of maps, each an individual message in its own line.
- {"a":"foo","b":123}\r\n{"c":"bar","d":456}\r\n
The map keys are ASCII strings, in lower case.
Map keys with empty (null) values are omitted from the message.
Lists may contain empty values e.g. "[null,null,1]" when necessary to maintain non-null values at the correct position. Null values at the end of the list may be omitted.
3.2 Fields
As a message is a map of key/value pairs, and the value can be any of the other value types, the message may be a tree composed of combinations of several levels of maps and lists. A field represents the deep path to reach a node of such tree.
- Map and list levels are separated by dots (“.”) or forward slashes (“/”) - For example, “a.b.c”.
- List items are represented by the hash character (“#”) followed by the list item position, starting at 1 – For example, "a.#3" is the third item from the list under "a".
For example:
{"a":{"b":{"c":true}}}
- The field “a” represents the node {"b":{"c":true}}
- The field “a.b” represents the node {"c":true}
{"a":{"b":[1,2,3]}}
- The field “a” represents the node {"b":[1, 2, 3]}
- The field “a.b” represents the node [1,2,3]
- The field “a.b.#2” represents the leaf 2
{"a":{"b":[1,{"c":true},3]}}
- The field “a.b.#2” represents the node {"c":true}
- The field “a.b.#2.c” represents the leaf true.
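The path rules above can be sketched in code. The following Python function is illustrative only (consumers may use any language, and the helper name is hypothetical): it resolves a dotted field path against a decoded event, treating "#n" as a 1-based list index and "/" as an alternative separator.

```python
def resolve(event, field):
    """Resolve a dotted field path (e.g. "a.b.#2.c") against a nested
    map/list structure; "#n" is a 1-based list index. Returns None
    when the path does not exist. Illustrative sketch only."""
    node = event
    for part in field.replace("/", ".").split("."):
        if part.startswith("#") and isinstance(node, list):
            idx = int(part[1:]) - 1          # "#1" is the first item
            if not 0 <= idx < len(node):
                return None
            node = node[idx]
        elif isinstance(node, dict) and part in node:
            node = node[part]
        else:
            return None
    return node

# The examples from the text above:
assert resolve({"a": {"b": {"c": True}}}, "a.b") == {"c": True}
assert resolve({"a": {"b": [1, 2, 3]}}, "a.b.#2") == 2
assert resolve({"a": {"b": [1, {"c": True}, 3]}}, "a.b.#2.c") is True
```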
Each part of the field can be specified using asterisk to match prefixes, suffixes or middle strings.
For example:
{"foo":{"bar":{"xyz":123}}}
- The field “f*.*r.*y*” matches “foo” with “f*”, “bar” with “*r” and “xyz” with “*y*”.
{"foo":[{"abc":1,"def":2}]}
- The field “f*.*.*e*” matches “foo” with “f*”, any list item with “*”, and “def” with “*e*”.
The asterisk is not mandatory for incomplete text, meaning that “foo*”, “*foo” and “*foo*” still match “foo”.
Asterisks should be avoided when the key is known, for performance reasons.
Additionally there is a special field combination “**” that will match multiple levels.
For example:
{"foo":{"bar":{"xyz":123},"ber":true}}
- The field “**” matches any leaf, namely “foo.bar.xyz” and “foo.ber”.
- The field “**.xyz” matches the “foo.bar.xyz”.
- The field “**.*y*” matches “foo.bar.xyz”.
- The field “**.*oo” matches “foo”, even at the root level.
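The wildcard rules can likewise be sketched. This Python snippet is an illustrative approximation, not the platform implementation (edge cases such as a bare “**” pattern are not covered); it matches a field pattern against the key paths of a decoded event.

```python
from fnmatch import fnmatchcase

def leaf_paths(node, prefix=()):
    """Yield (path, value) for every leaf in a nested map."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from leaf_paths(v, prefix + (k,))
    else:
        yield prefix, node

def matches(pattern, path):
    """Match a dotted pattern against a key path. '*' wildcards within
    one key; a leading '**' wildcards any number of levels. Sketch of
    the documented semantics, not the platform implementation."""
    parts = pattern.split(".")
    if parts[0] == "**":
        rest = parts[1:]
        return any(matches(".".join(rest), path[i:])
                   for i in range(len(path) - len(rest) + 1))
    if len(parts) != len(path):
        return False
    return all(fnmatchcase(k.lower(), p.lower())
               for p, k in zip(parts, path))

event = {"foo": {"bar": {"xyz": 123}, "ber": True}}
paths = [p for p, _ in leaf_paths(event)]
assert [p for p in paths if matches("**.xyz", p)] == [("foo", "bar", "xyz")]
assert [p for p in paths if matches("f*.*r.*y*", p)] == [("foo", "bar", "xyz")]
```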
4 Consumers
Consumers receive events from the Stream API.
4.1 HTTP Example
- curl -sN 'http://<servername>:<port>/stream?key=<key>'
Becomes an HTTP request like:
GET /stream?key=<key> HTTP/1.1
User-Agent: curl/7.30.0
Host: <server>:<port>
Accept: */*
The Stream Platform replies right away with a response like:
HTTP/1.1 200 OK
Date: (…)
Access-Control-Allow-Origin: *
Set-Cookie: s=(…)
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Transfer-Encoding: chunked

42C
{"a":"foo","b":123}\r\n{"a":"foo","b":123}\r\n(…)
The Stream Platform will reply the 200 OK and all headers as soon as the request is accepted, but the first body chunk may occur only later when the first event becomes available.
4.2 Authentication
A key is required to access the Stream Platform.
This key can be sent over the query-string (GET), but should preferably go over a standard HTTP Form POST (using “Content-Type: application/x-www-form-urlencoded”).
- curl -sN -d key=<key> 'http://<servername>:<port>/stream'
Becomes an HTTP request like:
POST /stream HTTP/1.1
User-Agent: curl/7.30.0
Host: <server>:<port>
Accept: */*
Content-Length: 36
Content-Type: application/x-www-form-urlencoded

key=<key>
The authentication key can also be sent as a HTTP header. The header is:
X-Stream-Key: <key>
- curl -sN -H 'X-Stream-Key: <key>' 'http://<servername>:<port>/stream'
Becomes an HTTP request like:
GET /stream HTTP/1.1
User-Agent: curl/7.30.0
Host: <server>:<port>
Accept: */*
X-Stream-Key: <key>
4.3 Limiting Data
Consumers can limit the consumed stream data by passing a parameter to limit the incoming messages either by time, or by bytes.
Passing the parameter “maxtime=xx” (in milliseconds), the Stream API will close the connection when a message is pushed to the Consumer after the requested time has elapsed. If no messages are being pushed, the connection is not closed at exactly this time, but only later, when the configured idle timeout elapses. As messages are processed in blocks, queued messages will still be pushed, so the real maxtime will always be slightly bigger than the requested value.
Passing the parameter “maxbytes=xx”, the Stream API will close the connection when a message is pushed to the Consumer after the total number of pushed bytes exceeds the given value. If no messages are being pushed, the connection is not closed until the configured idle timeout elapses. As messages are processed in blocks, queued messages will still be pushed, so the real total number of bytes will always be slightly bigger than the requested value.
4.4 WebSockets / EventSource
Note: Whenever possible, use EventSource instead of WebSockets, as EventSource is more “HTTP Proxy friendly”
Consumers can consume data directly from the browser by either using the regular HTTP stream endpoint or via WebSockets or EventSource.
The regular HTTP stream endpoint (http://server:port/stream) is more reliable to use when there may be proxies in the middle that do not know the WebSocket protocol. The disadvantage is that the XMLHttpRequest browser API receives the data in real time, but accumulates all the received data internally.
One recommended solution is to perform the query with a “maxbytes=” value big enough to be useful, but not so big that it would exhaust the browser's memory. The right value also depends on the expected traffic, but experience has shown that a value between 1MB and 10MB is suitable for high-traffic connections on modern browsers (Safari, Chrome, Firefox).
As the XMLHttpRequest will receive a stream of bytes, the consumer needs to split each message, by searching for the ‘\n’ character, cut the message, and keep the last ‘\n’ position for the next XMLHttpRequest callback when further data arrives.
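The buffering logic just described can be sketched as follows (Python is used for illustration and the class name is hypothetical; the same logic applies inside a JavaScript XMLHttpRequest callback):

```python
class LineSplitter:
    """Accumulate streamed chunks and return complete '\n'-terminated
    events, keeping the trailing partial line for the next callback.
    Minimal sketch of the buffering described above."""
    def __init__(self):
        self.buffer = ""

    def feed(self, chunk):
        self.buffer += chunk
        # Everything before the last '\n' is complete; the remainder
        # is kept until more data arrives.
        *complete, self.buffer = self.buffer.split("\n")
        return [line.rstrip("\r") for line in complete if line.strip()]

s = LineSplitter()
assert s.feed('{"a":1}\r\n{"b"') == ['{"a":1}']
assert s.feed(':2}\r\n') == ['{"b":2}']
assert s.buffer == ""
```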
The EventSource protocol (http://server:port/eventsource) and the WebSocket protocol (ws://server:port/websocket) tend to be more performant for the browser, as each message is delivered separately (no need to split messages on the ‘\n’ character), and the connection can be kept open as long as needed.
Depending on the configuration and version, the endpoint path may not be relevant: both HTTP and WebSocket services may be served by inspecting the WebSocket headers, so /stream, /websocket, or another alias (/ws) all respond. To test, check whether ws://server:port/stream works.
For the EventSource endpoint, use http://server:port/eventsource.
An example of an EventSource javascript:
var uri = "http://server:port/eventsource?key=<key>&s=<cookie>"; var es = new EventSource(uri); es.onppen = function(e) { console.log("open", e); } es.onerror = function(e) { console.log("error", e); } es.onclose = function(e) { console.log("close", e); } es.onmessage = function(e) { var json = JSON.parse(e.data); // do something }
An example of a WebSocket javascript:
var uri = "ws://server:port/websocket?key=<key>&s=<cookie>"; var ws = new WebSocket(uri); ws.onppen = function(e) { console.log("open", e); } ws.onerror = function(e) { console.log("error", e); } ws.onclose = function(e) { console.log("close", e); } ws.onmessage = function(e) { var json = JSON.parse(e.data); // do something }
As can be seen from the examples, EventSource and WebSockets can easily be interchanged just by changing the constructor (“new EventSource” vs “new WebSocket”) and the URL endpoint.
4.5 Field Filter
Content can be filtered to reduce the amount of data received by the consumer. It's equivalent to a "SELECT x,y,z FROM …" from SQL.
The parameter to use is "fields=", and each field shall be separated by commas.
Note: The HTTP query-string standard also allows passing multiple "fields=" parameters, as well as passing "fields[x]=" arrays.
Examples:
{"a":1,"b":{"foo": true,"bar":false},"c":[1,2,3]}
- “fields=*” means the whole message, which is redundant as it is the default;
- “fields=a.*” means the same as “a”, as the default is equivalent of always having “*” at the right side, returning "{"a":1}".
- “fields=b*.f*” matches “b.foo”, as “b*” matches “b” and “f*” matches “foo”, returning "{"b":{"foo":true}}".
- “fields=*b.*o” also matches “b.foo”, as “*b” matches “b” and “*o” matches “foo”;
- “fields=*b*.*o*” also matches “b.foo”, as “*b*” matches “b” and “*o*” matches “foo”;
- “fields=**.bar” matches “b.bar”, as “**” matches any level that contains a node “bar”, albeit in this case it matches only one level.
Each field can be prefixed with the symbol “-“ (minus) to denote items to remove from the output. Example: "fields=-b" returns "{"a":1,"c":[1,2,3]}".
When multiple fields are requested and no field matches, no (empty) message is returned. This means a list of fields behaves like an "or". Example: "fields=a,z" returns "{"a":1}", but "fields=x,y" returns nothing.
The symbol “+” (plus) forces a field to be mandatory, which is relevant when multiple fields are requested. Example: "fields=+a,z" returns "{"a":1}", but "fields=a,+z" returns nothing.
Note: When passing the "+" character through the query-string or POST, do not forget to escape it as %2b.
The fields on the list are processed sequentially, so the last matching rule is the one that wins. This is to allow rules like “-b.*, b.foo” to filter everything under “b” except “foo”.
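As a sketch of this sequential, last-rule-wins evaluation (simplified to flat, single-level keys with only the “*” wildcard; the real platform also handles deep paths and “+” mandatory fields, and the function name is hypothetical):

```python
from fnmatch import fnmatchcase

def select_fields(event, fields):
    """Apply a comma-separated field list to a flat event. Rules are
    evaluated in order and the last matching rule wins; a leading '-'
    removes the field. Simplified illustrative sketch."""
    out = {}
    for key, value in event.items():
        keep = False
        for rule in fields.split(","):
            negate = rule.startswith("-")
            pattern = rule.lstrip("+-")
            if fnmatchcase(key, pattern):
                keep = not negate      # last matching rule wins
        if keep:
            out[key] = value
    return out

event = {"a": 1, "b": {"foo": True}, "c": [1, 2, 3]}
assert select_fields(event, "-b,a,b") == {"a": 1, "b": {"foo": True}}
assert select_fields(event, "*,-b") == {"a": 1, "c": [1, 2, 3]}
```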
4.6 Data Filter
Events can be filtered to receive only the ones that match a set of key/value filters. It’s the equivalent to a “SELECT … WHERE x=y…” of SQL.
For each filter item, a parameter should be passed containing the field as the parameter key, and the value to match.
Note: When passing through the query-string/post, it shall be prefixed with “f.” to avoid parameter name collisions. As each match is a separate query parameter, the query-string becomes like "f.x=y&f.z=t".
Prefix a field with the character "~" (tilde) to match a value only if the field does exist.
Prefix a field with the character "-" (minus) to match the absence of a field (the value is then irrelevant, and "*" (asterisk) shall be used).
Match a field with "*" (asterisk) to ensure the field does exist.
The format of the value being matched is described below. By default it is a case-insensitive string match, optionally with asterisk prefixes and suffixes, as in the examples below.
Examples:
{"a":1,"b":{"foo":"bar","bar":"foo"},"c":[1,2,3]}
- "f.a=1" → pass, as the key “a” has a value “1”;
- "f.a=2" → reject, as the key “a” has a value “1”, different from the requested “2”;
- "f.b.foo=bar" → pass, as the key “b” exists, the second level key “foo” exists, and its value is indeed “bar”
- "f.c=2" → pass, as the key “c” exists, and the value, albeit a list, does contain “2” in it.
- "f.a=1,2" = pass, as the key “a” has the value “1”;
- "f.a=1&f.b.foo=bar" → pass as both “a”=1 and b.foo=bar;
- "f.a=1&f.z=bar" = reject, as albeit “a”=1, "z" doesn't exist.
- "f.a=1&f.~z.foo=bar" → pass as “a”=1 and z and “foo”, being optional and not present, the test is skiped
- "f.a=1&f.~b.foo=bar" → pass, as the optional field “b” does exist, and does contain a “foo” with the requested value “bar”
- "f.a=1&f.~b.foo=foo" → reject, as the optional field “b” does exist, but the value does not match.
- "f.a=*&f.c=*" → pass, as both "a" and "c" fields do exist.
- "f.-z=*" → pass, as the "z" fields do not exist.
The field part respects the same rules as described above on the "Fields" section, namely the asterisks and separators. Example "f.**=foo" will pass any message where any value anywhere on the tree is "foo".
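The filter semantics above can be sketched as follows (Python for illustration, simplified to flat keys and plain string matching; the function name is hypothetical):

```python
def passes(event, filters):
    """Evaluate simple equality filters against a flat event.
    '~' prefix: test only if the field exists; '-' prefix: the field
    must be absent; value '*': the field must merely exist. Sketch of
    the documented semantics with literal keys only."""
    for field, want in filters.items():
        if field.startswith("-"):
            if field[1:] in event:
                return False
            continue
        optional = field.startswith("~")
        key = field.lstrip("~")
        if key not in event:
            if optional:
                continue               # optional and absent: test skipped
            return False
        if want != "*" and str(event[key]).lower() != want.lower():
            return False
    return True

event = {"a": 1, "b": "BAR"}
assert passes(event, {"a": "1", "b": "bar"})     # case-insensitive match
assert passes(event, {"~z": "anything"})         # optional, absent: skipped
assert not passes(event, {"-a": "*"})            # 'a' exists, absence fails
assert passes(event, {"-z": "*"})
```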
4.6.1 String Matchers
Values are matched by default as strings, case insensitively.
Booleans and numbers are also matched automatically: "f.x=true" will match both the string "true" and the real Boolean true.
4.6.2 Asterisk Matchers
Strings can be matched by prefix, suffix or both, by using the "*" (asterisk) character.
- “f.x=foo*” → will match values starting with foo
- “f.x=*bar” → will match values ending with bar.
- “f.x=*abc*” → will match values containing "abc" anywhere, including "abc" itself.
4.6.3 Number Matchers
Values can be matched comparatively, numerically or string-based.
To use this feature, prefix the filter with the character ‘<’ or ‘>’, followed optionally by the character ‘=’ for the “-or-equals” case.
Note: The query-string/post format is specified as "key=val&key2=val2", so recall the first "=" (equal) sign belongs to the query-string and not to the "-or-equals".
- “f.x=>10” → will only match numbers (including floating points) bigger than 10 (note: even if the value is a number written as a string (“10”))
- “f.x=>=10” → will match numbers greater than or equal to 10
- “f.x=<10” → will match numbers smaller than 10 (including negatives)
- “f.x=<=10” → will match numbers smaller than or equal to 10.
- “f.x=>foo” → will match strings bigger than “foo”, according to the ASCII and UTF-8 tables. E.g. of matches: goo (g>f), fop (p>o), fooo (4 > 3 letters). E.g. of non-matches: eoo (e<f), fon (n<o), fo (2 < 3 letters), fono (4 letters, but n<o).
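A sketch of how such a comparator spec could be interpreted (illustrative only; the platform's exact string-collation rules may differ, and the function name is hypothetical):

```python
def compare_match(value, spec):
    """Interpret a '>'/'<' matcher spec such as '>10' or '<=foo'.
    Numeric comparison when both sides parse as numbers, otherwise
    plain string ordering. Illustrative sketch only."""
    op = spec[0]
    or_equals = len(spec) > 1 and spec[1] == "="
    bound = spec[2:] if or_equals else spec[1:]
    try:
        value, bound = float(value), float(bound)
    except ValueError:
        value, bound = str(value), str(bound)
    if value == bound:
        return or_equals
    return value > bound if op == ">" else value < bound

assert compare_match("10", ">=10")      # numeric, even written as a string
assert compare_match(15, ">10")
assert not compare_match(10, ">10")
assert compare_match("goo", ">foo")     # 'g' sorts after 'f'
assert compare_match("fooo", ">foo")    # longer string with equal prefix
```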
4.6.4 Interval Matchers
Numbers and strings can be matched according to the same rules as the Number matchers, but within intervals.
To enable this feature, prefix the filter with the character ‘#’, followed optionally by the character ‘=’ for the “-or-equals” case, the first value, again the character ‘#’ with the optional ‘=’, and the second value.
Note: The query-string/post format is specified as "key=val&key2=val2", so recall the first "=" (equal) sign belongs to the query-string and not to the "-or-equals". Also recall the "#" (hash) is a reserved character and shall be escaped as %23.
- “f.x=#10#20” → will match values greater than 10 and less than 20
- “f.x=#=10#=20” → will match values greater than or equal to 10 and less than or equal to 20
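A sketch of parsing an interval spec (illustrative, numeric values only; the function name is hypothetical):

```python
import re

def interval_match(value, spec):
    """Interpret an interval spec '#10#20' or '#=10#=20': the value
    must be above the first bound and below the second, with '=' for
    the inclusive variants. Illustrative sketch for numbers only."""
    m = re.fullmatch(r"#(=?)([^#]+)#(=?)(.+)", spec)
    lo_eq, lo = m.group(1) == "=", float(m.group(2))
    hi_eq, hi = m.group(3) == "=", float(m.group(4))
    v = float(value)
    low_ok = v >= lo if lo_eq else v > lo
    high_ok = v <= hi if hi_eq else v < hi
    return low_ok and high_ok

assert interval_match(15, "#10#20")
assert not interval_match(10, "#10#20")     # exclusive bound
assert interval_match(10, "#=10#=20")       # inclusive bound
```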
4.6.5 CIDR Matchers
Values can be matched against IPs or CIDRs (sub-nets).
To enable this filter, prefix the filter with the ‘@’ character.
- “f.x=@127.0.0.1” → matches 127.0.0.1
- “f.x=@127.0.0.1/8” → matches any 127.0.0.0 to 127.255.255.255
Note: IPv6 matching will be available in a future version.
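The CIDR matching can be sketched with the standard Python ipaddress module (illustrative only; note this sketch would also accept IPv6 addresses, which, as noted above, the platform does not yet support):

```python
import ipaddress

def cidr_match(value, spec):
    """Interpret an '@' matcher spec such as '@127.0.0.1/8': test
    whether the value is an IP inside the given network (or equal to
    a single IP). Sketch using the standard ipaddress module."""
    # strict=False lets '127.0.0.1/8' denote the network 127.0.0.0/8.
    net = ipaddress.ip_network(spec.lstrip("@"), strict=False)
    try:
        return ipaddress.ip_address(str(value)) in net
    except ValueError:
        return False                  # value is not an IP at all

assert cidr_match("127.0.0.1", "@127.0.0.1")
assert cidr_match("127.200.1.1", "@127.0.0.1/8")
assert not cidr_match("128.0.0.1", "@127.0.0.1/8")
assert not cidr_match("not-an-ip", "@127.0.0.1/8")
```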
4.6.6 RegExp Matchers
Values can be filtered by simple regular expressions.
To enable this feature prefix the filter with the ‘.’ character. Like the string match, the match is performed case insensitive.
- “f.x=.[0-9]\..*” → matches "1.foobar".
- “f.x=..*MSIE.*" → matches “Mozilla/4.0 (MSIE) …)".
Note: The characters "[" and "]" may need to be escaped, for example with curl without the “g” parameters.
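A sketch of the regular-expression matcher (illustrative; whether the platform anchors the expression to the whole value is an assumption here, inferred from the examples above):

```python
import re

def regexp_match(value, spec):
    """Interpret a '.'-prefixed matcher spec as a case-insensitive
    regular expression over the whole value. Illustrative sketch;
    anchoring to the full value is an assumption."""
    return re.fullmatch(spec[1:], str(value), re.IGNORECASE) is not None

assert regexp_match("1.foobar", r".[0-9]\..*")
assert regexp_match("Mozilla/4.0 (MSIE)", r"..*msie.*")
assert not regexp_match("foobar", r".[0-9]\..*")
```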
4.6.7 Negative Matchers
The matched result can be inverted by prefixing the matcher with the character "-" (minus) or "^" (caret). The caret alternative is necessary, for example, to distinguish between the value "-10" (minus 10) and the negation "^10" (10 negated).
The negative matcher can be applied to any of the other matchers, albeit some cases won't make much sense – better to use "f.x=>10" than "f.x=-<=10".
4.7 Modules
Enabling additional modules will augment or process the event data. Some modules may be enabled by default, whilst others can be enabled only on demand by the Consumer, as the additional data may take considerable time to gather and hence should be applied only to a smaller subset of relevant content.
By default, each module will add to the event one or more key/value pairs, with the key composed of an underscore (“_”), the module prefix, plus the path of the augmented data, with dots replaced by underscores.
For example:
{"a":1,"b":{"foo":"1.2.3.4","bar":"foo"},"c":[1,2,3]}
- If a module matches the IP at the field path “b.foo”, there will be a new key “_module_b_foo” with the value corresponding to the module’s data.
When the field contains lists, the same rule of “#x” will reference the item matched. For example:
{"a":1,"b":["foo","1.2.3.4","bar"],"c":[1,2,3]}
- When the same module matches the IP in the middle of the list, there will be a new key “_module_b_#2” with the value corresponding to the module’s data.
To enable modules, just pass the parameter “modules=x,y,z”.
To disable an active-by-default module, pass the name of the module prefixed with a minus (“-“), e.g. “modules=x,-y,z”.
Some modules support optional additional features. To enable them, prepend the feature names to the module name, separated by “+”.
Note: Recall that a “+” becomes %2b if passed over the query-string. Example: "modules=foo+a+b,bar".
Each module specified below may or may not be available, and may or may not be active by default, or even be locked as active, depending on the configuration of the Stream API endpoint.
Please bear in mind that the order of the activated modules is relevant – each module applies to the result of the previous one. Order also matters performance-wise: for example, if one module adds data and the next one discards messages, it may be worth reversing the order to first discard the messages and then add data only to the relevant messages that will not be discarded.
4.7.1 Module Add
- Module name: “add”
- Module response: as requested
- Additional Features: none
- Parameters: “a.<field>=value” (similar format as “f.<field>”)
The Add module allows adding keys and values to the message where the value may be a fixed value, or a calculation.
Example: "…otherfilters…&modules=add&a.foo=bar" will return the event with a additional "{ …, “foo” : “bar” }"
This feature can be used for a consumer to pass back an opaque value, especially on the case of multiple filters (see below).
The value can also contain functions in the format “{xxx}”. These blocks will be processed as defined below, and the result substituted. These functions are executed from the innermost to the outermost, allowing functions to be applied to the result of other functions. Example: "{xxx{yyy}}".
The order of the “a.xxx” is also relevant because a later “a.yyy” can access the value calculated on the previous phase.
Please note that at the moment the new field shall be unique and must not collide with existing structures. For example, “a.x.y” will indeed create a structure "{...,"x":{"y":value}}", but if "x" or "x.y" already exists, the resulting JSON may become malformed. It is recommended, for safety, that the new key is flat (only one level), starts with “_” to denote “metadata”, and is guaranteed to be unique.
Existing functions {…}:
- "a.x={f.<field>}" of "a.x={f:<field>}" → will be replaced with the value for the given field, similarly to the "f.<field>" filter. At the moment the referenced field may not contain asterisks or regular expressions but only a literal key value. Example: "…&modules=add&a._foo=hello {f._origin} world" → "{…,“foo”:“hello test world”}"
- "a.x={f.<field>:start:end}" (also "f:"), or "{s:value:start:end}" → will calculate a substring of the given value (“s:”) or key value (“f:”). If the “:end” part is omitted, the value will be cut “start” characters from the left, or from the right if the value is negative. If the “end” is given, the value will be cut from the “start” position (inclusive) to the “end” position (exclusive). The combination of a positive start and a negative end is also possible to cut “start” characters from the left and “end” characters from the right side. E.g.:
- "a.x={f:_origin:2}" → "{…,"x":"st"}"
- "a.x={f:_origin:-2}" → "{…,"x":"te"}"
- "a.x={f:_origin:2:3}" → "{…,"x":"s"}"
- "a.x={f:_origin:1:-1}" → "{…,"x":"e"}"
- "a.x={uc:value}" → returns the upper case of the given value
- "a.x={lc:value}" → returns the lower case of the given value
- "a.x={md5:value}" → returns the MD5 of the given value (lower case)
- "a.x={r:value:search:replace}" → returns the result of the given regular expression applied to the given value. The search part may contain grouping, and the replace use $1..$n to use those groups
- "a.x={ts}" → returns the current unix timestamp in seconds
- "a.x={tsm}" → returns the current unix timestamp in milliseconds
- "a.x={map:<map name>:<map key>}" → returns the value for the given map name and map key
As these functions can be embedded into each other, it’s possible to do, for example: "{md5:{lc:{f._origin}withsalt}}".
Also, since the value is a string with embedded functions, concatenation is possible, e.g. “foo {f._origin} bar {f._origin} other”.
If the end result of the value looks like a number, it will be represented as a number instead of a string-wrapped number.
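Two of the functions above can be sketched to make the semantics concrete (illustrative only; the substring indexing shown is an interpretation of the description, zero-indexed as in the “{f:_origin:2:3}” example, and the helper names are hypothetical):

```python
import hashlib

def fn_substring(value, start, end=None):
    """Sketch of the {s:value:start:end} semantics described above:
    with no end, drop |start| characters from the left (or keep up to
    the right cut when negative); with an end, slice [start, end)
    zero-indexed. Interpretation of the description, not verified
    against the platform."""
    if end is None:
        return value[start:] if start >= 0 else value[:start]
    return value[start:end]

def fn_md5(value):
    """{md5:value} -> lower-case hex digest."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

assert fn_substring("test", 2) == "st"      # drop 2 from the left
assert fn_substring("test", -2) == "te"     # drop 2 from the right
assert fn_substring("test", 2, 3) == "s"
assert fn_md5("foo") == "acbd18db4cc2f85cedef654fccc4a4d8"
```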
4.7.2 Module Group
- Module name: “group”
- Module response: “_group”
- Additional Features: none
- Additional Parameters:
- “group” → list of fields to define the unique key from the respective values, separated by comma
- “grouptime” (alias group_time), in milliseconds, default 60 seconds → the minimum time to hold back similar messages
The Group module allows de-duplicating multiple events with similar content, by unique key, letting the stream return a single entry for each requested time period and, as a bonus, include the count of similar occurrences during that period.
Example:
Assumption: there are multiple messages with a type field, an IP (quite variable), and, for example, the geo country of that IP; only an approximate average per type and country is needed, not the concrete IP information.
...&otherfilters…&modules=group&group=type,_geo_ip.country_code&grouptime=5000
This will return the first message, augmented with a structure like:
{...,"_group":{"count":1}}
Now for the given “grouptime” period, no more messages will be pushed into the consumer that matches the same group filter.
After the grouptime expires, but only when another message that matches that filter is available, it will be pushed into the consumer, now with more information:
{...,"_group":{"count":567,"elapsed":5050,"avg":112.27}}
This means that in the previous 5050ms (always bigger than the requested grouptime) there were 567 messages matching the group filter, which gives an approximate average of 112.27 messages/second.
Please bear in mind that messages are only pushed when they are produced, irrespective of the grouptime. For example, if a message occurs only once every minute and grouptime is set to 5 seconds, there will still be only one message per minute.
Please also bear in mind that the counter does not count the number of messages relative to the given grouptime value, but relative to the returned “elapsed”. The average is calculated taking previous values into consideration, to soften the average and reduce unnatural spikes. For example, for a grouptime of 5 seconds, it is expected to see averages calculated over elapsed values of 7500ms.
The group field can contain asterisks and match multiple values. This feature is relevant for cases where there are multiple fields to look at, for example, multiple _geo* or _uri* nodes.
It is possible to filter out the first message by adding the filter "…&f._group.count=>1".
It is possible to use the group module to simulate throttling, by specifying a group key that is consistent, or even a key that does not exist. Example "…&modules=group&group=_foo_&grouptime=100" to throttle the incoming events to maximum 10 per second.
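The de-duplication behaviour described above can be modeled client-side with a small sketch. This is an illustrative approximation of the documented semantics (the server's actual implementation is not published), with an injectable millisecond clock:

```python
class GroupModel:
    """Illustrative model of the Group module's de-duplication logic."""

    def __init__(self, group_fields, grouptime_ms, clock):
        self.group_fields = group_fields
        self.grouptime_ms = grouptime_ms
        self.clock = clock  # callable returning the current time in ms
        self.state = {}     # key -> (window_start_ms, suppressed_count)

    def feed(self, message):
        """Return the message (augmented with _group) or None if held back."""
        key = tuple(str(message.get(f, "?")) for f in self.group_fields)
        now = self.clock()
        if key not in self.state:
            # First message for this key: pushed immediately with count 1.
            self.state[key] = (now, 0)
            return {**message, "_group": {"count": 1}}
        start, count = self.state[key]
        count += 1
        elapsed = now - start
        if elapsed < self.grouptime_ms:
            # Still inside the grouptime window: hold the message back.
            self.state[key] = (start, count)
            return None
        # Window expired and a matching message arrived: emit with stats.
        self.state[key] = (now, 0)
        avg = count / (elapsed / 1000.0)  # messages per second
        return {**message, "_group": {"count": count, "elapsed": elapsed,
                                      "avg": round(avg, 2)}}

t = [0]
model = GroupModel(["type"], 5000, lambda: t[0])
print(model.feed({"type": "a"}))  # first message: pushed with count 1
t[0] = 1000
print(model.feed({"type": "a"}))  # within grouptime: held back (None)
t[0] = 5050
print(model.feed({"type": "a"}))  # pushed with count, elapsed and avg
```

Note that, as in the real module, nothing is emitted between matching messages: the elapsed value is always at least the requested grouptime.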
Although it is recommended to specify "modules=group", especially to define the order when multiple modules are enabled, the presence of the "group=" parameter will enable the module automatically.
4.7.3 Module Follow
- Module name: “follow”
- Module response: “_follow”
- Additional Features: none
- Additional Parameters:
- “followname” → the name of the list to collect values
- “followtime” (alias follow_time), in milliseconds, default 60 seconds → the maximum time to keep values
- “followdump” (alias follow_dump), boolean, enables dumping the value when the filter matches the list
The follow module allows executing a query with a filter based on values collected from another query.
The feature is composed of two parts: one collects the values, the other filters by those values.
To collect values, craft a regular query that captures the events containing those values. It can also be a KPI query, which allows capturing the most relevant values. To that query, append the parameters that save the values into a named list.
Example:
{"a": 1, "b": {"foo": "127.0.0.2", "bar": "foo", "_cc_b_foo": {"ip": "127.0.0.2", "path": "b.foo", "type": "zeus"}}}
…&f.type=zeus&f.a=1&followname=zeus_ips&follow=b.foo
This will capture the values inside b.foo (127.0.0.2) into a list of unique values named “zeus_ips”. The values in the list expire after the given followtime period elapses. This list is kept in memory and tied to the consumer session.
Then, to filter by these values:
…&f.ip=:zeus_ips
This will filter events where the field “ip” contains a value from the current list of values in the “zeus_ips” list. This is a case-insensitive match.
The follow parameter accepts asterisks (e.g. if the value is a list of values).
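The collect-then-filter flow above can be sketched locally as a value list with expiry. This is an illustrative model of the documented behaviour (case-insensitive matching, followtime expiry), not the server's code:

```python
import time

class FollowList:
    """Illustrative model of the Follow module's value list with expiry."""

    def __init__(self, followtime_ms, clock=lambda: time.time() * 1000):
        self.followtime_ms = followtime_ms
        self.clock = clock          # injectable millisecond clock
        self.values = {}            # lowercased value -> last-seen ms

    def collect(self, value):
        # Matching is case-insensitive, so values are stored lowercased.
        self.values[str(value).lower()] = self.clock()

    def matches(self, value):
        now = self.clock()
        # Values not refreshed within followtime_ms expire.
        self.values = {v: ts for v, ts in self.values.items()
                       if now - ts < self.followtime_ms}
        return str(value).lower() in self.values

t = [0]
fl = FollowList(60000, clock=lambda: t[0])
fl.collect("127.0.0.2")
print(fl.matches("127.0.0.2"))  # True: value is in the list
t[0] = 61000
print(fl.matches("127.0.0.2"))  # False: followtime has elapsed
```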
4.7.4 Module KPI
- Module name: “kpi”
- Module response: “_kpi” (no original message)
- Additional Features: none
- Additional Parameters:
- “kpi” → list of fields to define the unique key from the respective values
- “kpitime” (alias kpi_time), in milliseconds, default 1 second → the period to push kpi messages into the consumer
- “kpiavg” (alias kpi_avg), in milliseconds, default 300 seconds → the period to calculate averages
- “kpilimit” (alias kpi_limit), default 10 → the maximum number of items to return.
- “kpisum” (alias kpi_sum) → by default the number of messages is counted; this parameter specifies a numeric field whose values are summed instead.
- "kpiranges" → allows defining numeric ranges
The KPI module counts the values of similar messages and pushes to the consumer an aggregated, ordered list of average counters. The original messages are not pushed when this module is used.
Example:
Let’s assume multiple messages with a type field and, for example, the geo country of an IP, and we want to know the top combinations of those two values:
...&otherfilters&modules=kpi&kpi=type,_geo_ip.country_code
This will return a message with the following format, once per second (the kpitime default):
{"_kpi":[ {"key":"TOTAL", "entries":98, "elapsed":1013, "count":21701, "avg":21422.51}, {"key":"foo,us", "count":18442, "avg":18205.33, "pct":84.98}, … ]}
The "_kpi" key contains a list of one up to "kpilimit"+1 items.
The first item of the list is the "Total", summing all data calculated up to this moment:
- key → constant string "TOTAL"
- entries → indicates the total number of counted items
- elapsed → indicates an elapsed time in milliseconds since the last kpi calculation (not necessarily the last message)
- count → indicates the counter of items during the elapsed time
- avg → indicates "count" / "elapsed" (*1000), or items per second
The "entries" value can be used to detect if there are more items than the returned ones, for example, to show a pie chart with "others". By summing all "pct" values from the received items, and calculating the difference to 100, one knows the percentage of the "others" pie.
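The "others" computation described above can be written as a small helper. This sketch just applies the documented rule (sum the received "pct" values and take the difference to 100); the sample "_kpi" list below is illustrative data:

```python
def others_percentage(kpi_items):
    """Compute the "others" slice of a pie chart from a "_kpi" list.

    The first item is the TOTAL; the remaining items each carry a "pct".
    Whatever is missing to 100% belongs to keys beyond "kpilimit".
    """
    listed = sum(item["pct"] for item in kpi_items[1:])
    return round(100.0 - listed, 2)

kpi = [
    {"key": "TOTAL", "entries": 98, "elapsed": 1013, "count": 21701, "avg": 21422.51},
    {"key": "foo,us", "count": 18442, "avg": 18205.33, "pct": 84.98},
    {"key": "bar,gb", "count": 2170, "avg": 2142.25, "pct": 10.0},
]
print(others_percentage(kpi))  # → 5.02
```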
The remaining items on the list are from zero up to "kpilimit" items:
- key → the value of the requested "kpi", separated by commas, with "?" or "Unknown" if the value did not exist
- count → the number of items for this key
- avg → same as above, but using this "count" value
- pct → percentage of this item ("this count" / "total count" * 100)
The KPI calculation, for performance reasons, is limited to a maximum number of items, and averages that tend to zero will also be removed from the list, so values like the number of entries may oscillate over a certain period.
As with the group module, the kpi message is only pushed into the consumer when a message matches the required filters and the "kpitime" has elapsed. For example, if the original message occurs only once per minute, the "_kpi" will also occur only once per minute.
The "kpiavg" allows specifying the averaging period of the counters separately from the period of the kpi message. With the default values, the counters and averages are calculated over a 300-second period (plus a proportional part of past values, to soften the averages and avoid spikes), but the current value is pushed every second.
The "kpisum" allows specifying an event field to be used as the increment value for the counter. By default (without the kpisum), each relevant event will increment the counter by one. By specifying the "kpisum" to a field with a numeric value, the counter will be incremented by that value. This can be used for example to count tops of bytes.
The "kpisum" can also be used to calculate averages of values per event instead of per time. By specifying "kpisum=avg:<field>", the result's "count" will become the sum of the values from the given field, divided by the number of events matching the filter, averaged over the "kpiavg" time.
The "kpiranges" parameter allows specifying ranges of numbers and calculating KPIs of values within those ranges. For example, "kpiranges=5,30,60,300,1800,3600" will create the ranges "0-5", "6-30", "31-60", "61-300", "301-1800", "1801-3600" and "3601-infinite", and then match the "kpi" value within these ranges. This can be used, for example, to count tops of time ranges.
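The bucketing described above can be modeled with a short helper. This is an illustrative sketch of the documented range labels, not the server's implementation:

```python
import bisect

def range_bucket(value, boundaries):
    """Map a numeric value into the range labels produced by the ranges
    feature: boundaries [5, 30, 60] give buckets "0-5", "6-30", "31-60"
    and "61-infinite", mirroring the example in the text."""
    i = bisect.bisect_left(boundaries, value)
    if i == len(boundaries):
        # Beyond the last boundary: the open-ended top bucket.
        return "%d-infinite" % (boundaries[-1] + 1)
    low = 0 if i == 0 else boundaries[i - 1] + 1
    return "%d-%d" % (low, boundaries[i])

bounds = [5, 30, 60, 300, 1800, 3600]
print(range_bucket(4, bounds))     # → 0-5
print(range_bucket(45, bounds))    # → 31-60
print(range_bucket(9999, bounds))  # → 3601-infinite
```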
The "kpi" field can contain asterisks and match multiple values. This feature is relevant for cases where there are multiple fields to look at, for example, multiple _geo* or _uri* nodes.
A special "kpi" value "_" can be used to calculate the throughput of the given filter. It will set kpilimit=0 automatically (as there won't be keys named "_" anyway).
In certain cases of multiple queries, one kpi calculation may conflict with another one, for example the same kpi but with different "add" parameters. In this case, as the kpi is calculated only once, only one message would be pushed into the consumer instead of the expected two. It is possible to add "fake" keys to the "kpi" by prepending and appending "_" (underscore) to a key. For example, "kpi=foo,_a1_" and "kpi=foo,_a2_" will calculate "kpi=foo" twice (independently), but the "_a1_" and "_a2_" won't affect the response values.
Although it is recommended to specify "modules=kpi", especially to define the order when multiple modules are enabled, the presence of the "kpi=" parameter will enable the module automatically.
4.7.5 Module Unique
- Module name: “unique”
- Module response: “_unique” (no original message)
- Additional Features: none
- Additional Parameters:
- “unique” → list of fields to define the unique key from the respective values
- "uniquecount" → the field whose value is counted as unique
- “uniquetime” (alias unique_time), in milliseconds, default 1 second → the period to push unique messages into the consumer
- “uniqueavg” (alias unique_avg), in milliseconds, default 300 seconds → the period to calculate averages
- “uniquelimit” (alias unique_limit), default 10 → the maximum number of items to return.
- “uniquestop” (alias unique_stop) → a field that, when present with the value "stop", removes the value from the list
The Unique module counts unique values from similar messages and pushes to the consumer an aggregated, ordered list of average counters. The original messages are not pushed when this module is used.
Example:
Let’s assume multiple messages with a type field and an IP, and we want to know how many unique IPs exist for each type:
...&otherfilters&modules=unique&unique=type&uniquecount=ip
This will return a message with the following format, once per uniquetime period:
{"_unique":[ {"key":"TOTAL", "entries":98, "count":21701}, {"key":"foo", "count":18442, "pct":84.98}, … ]}
Like the KPI module, the result is a list of items under the "_unique" key: the first item states the total, and the remaining items are from zero up to "uniquelimit" items.
The first item, like the KPI, has a fixed key="TOTAL", and the total number of entries.
The count contains the number of unique values seen on the field "uniquecount". When a message contains a certain value, that value is stored. When another message contains the same value, the value is already stored, so the counter won't increment. If the value is not seen for "uniqueavg" time, the value is removed from the list and the counter decrements.
In case the "uniquestop" parameter is used, if a message contains this field and the field value is "stop", the value for the "uniquecount" is removed immediately from the storage, and the counter decrements.
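The counting rules above (duplicates don't increment, values expire after "uniqueavg", "uniquestop" removes immediately) can be sketched as a small model. This is illustrative only, with an injectable millisecond clock:

```python
class UniqueCounter:
    """Illustrative model of the Unique module's per-key unique counting."""

    def __init__(self, uniqueavg_ms, clock):
        self.uniqueavg_ms = uniqueavg_ms
        self.clock = clock  # callable returning the current time in ms
        self.seen = {}      # key -> {value: last_seen_ms}

    def feed(self, key, value, stop=False):
        values = self.seen.setdefault(key, {})
        if stop:
            values.pop(value, None)   # "uniquestop": remove immediately
        else:
            values[value] = self.clock()  # duplicates just refresh the timestamp

    def count(self, key):
        now = self.clock()
        values = self.seen.get(key, {})
        # Values unseen for uniqueavg_ms expire and stop being counted.
        live = {v: ts for v, ts in values.items()
                if now - ts < self.uniqueavg_ms}
        self.seen[key] = live
        return len(live)

t = [0]
uc = UniqueCounter(300000, lambda: t[0])
uc.feed("foo", "1.2.3.4")
uc.feed("foo", "1.2.3.4")       # duplicate: counter stays at 1
uc.feed("foo", "5.6.7.8")
print(uc.count("foo"))           # → 2
uc.feed("foo", "5.6.7.8", stop=True)
print(uc.count("foo"))           # → 1
```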
The Unique calculation, for performance reasons, is limited to a maximum number of items, and values that tend to zero will also be removed from the list, so values like the number of entries may oscillate over a certain period.
As with the group and kpi module, the unique message is only pushed into the consumer when a message matches the required filters and the "uniquetime" has elapsed. For example, if the original message occurs only once per minute, the "_unique" will also occur only once per minute.
The "unique" field can contain asterisks and match multiple values. This feature is relevant for cases where there are multiple fields to look at, for example, multiple _geo* or _uri* nodes.
In certain cases of multiple queries, one unique calculation may conflict with another one, for example the same unique but with different "add" parameters. In this case, as the unique is calculated only once, only one message would be pushed into the consumer instead of the expected two. It is possible to add "fake" keys to the "unique" by prepending and appending "_" (underscore) to a key. For example, "unique=foo,_a1_" and "unique=foo,_a2_" will calculate "unique=foo" twice (independently), but the "_a1_" and "_a2_" won't affect the response values.
Although it is recommended to specify "modules=unique", especially to define the order when multiple modules are enabled, the presence of the "unique=" and "uniquecount=" parameters will enable the module automatically.
4.7.6 Module URI
- Module name: “uri”
- Module response: “_uri_<normalized field>”
- Additional Features: “idn”, “uri”
- Additional Parameters:
- “urikeys” → list of fields to look for URIs
The URI module recognizes URIs or hostnames, and augments the message with the details of those values.
By default the module will try to recognize URIs in every field of the original message. The "urikeys" parameter shall be used to specify exactly which fields to look at.
Currently the only validation performed is ensuring the hostname part has a valid TLD or is an IP. No validation is performed to check whether the hostname is registered in the DNS. Internal domains may not be identified (e.g. "localhost", "localhost.localdomain", "*.lan", "*.local", etc.)
By default the module only recognizes values that contain a full URI. Enabling the "uri" feature will recognize URIs within other text, including multiple URIs in the same field.
Example:
{"a":1, "b":{"foo":"www.bitsight.com", "bar":"foo"}}
becomes:
{"a":1, "b":{"foo":"www.bitsight.com", "bar":"foo", "_uri_b_foo":{"uri":"www.bitsight.com", "tld":"com", "host":"www.bitsight.com", "domain":"bitsight.com", "path":"b.foo"}}}
- The module recognizes the value under "b.foo" as a URI
- The module appends a structure "_uri_b_foo"
- The “uri” is the whole URI value grabbed from the field path “b.foo”.
- The “path” is the original field path where the URI was grabbed.
- "tld" contains the TLD of the given hostname/URI, according to the public TLD database, e.g. “com”, “co.uk”, including IDN-enabled TLDs.
- The host contains the hostname part of the URI, or the hostname itself.
- The optional "domain" contains the domain part of the hostname - the TLD plus the first component before.
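The naming of the augmented structure ("b.foo" → "_uri_b_foo", and likewise "_geo_b_foo", "_cc_b_foo" in the modules below) follows an apparent convention that can be sketched as a helper. This is inferred from the documented examples, not a published rule:

```python
def module_response_key(module, field_path):
    """Build the augmented-structure key for a module and a field path.

    Inferred from the documented examples: the dotted field path is
    flattened with underscores and prefixed with "_<module>_".
    """
    return "_%s_%s" % (module, field_path.replace(".", "_"))

print(module_response_key("uri", "b.foo"))  # → _uri_b_foo
print(module_response_key("geo", "b.foo"))  # → _geo_b_foo
```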
Example with "modules=uri+idn+uri" and a URI with an IDN hostname:
{"a":1, "b":{"foo":"http://اتصالات.امارات/business_ar.html#123", "bar":"foo", "_uri_b_foo":{"tld":"امارات", "tld_idn":"xn--mgbaam7a8h", "host":"اتصالات.امارات", "host_idn":"xn--mgbaakc7dvf.xn--mgbaam7a8h", "uri":"http:\/\/اتصالات.امارات\/business_ar.html", "scheme":"http", "port":80, "uri_path":"\/business_ar.html", "fragment":"123", "path":"b.foo"}}}
- The “tld_idn”, “host_idn” and “fqdn_idn” contain the corresponding IDN (punycode) forms of the TLD, host and FQDN, when they are IDNs.
- The “uri” is the grabbed full URI.
- The “scheme”, “port”, “uri_path” and “fragment” are the separate components of the URI.
4.7.7 Module Geo
- Module name: “geo”
- Module response: “_geo_*”
- Additional Features: none
- Additional Parameters:
- “geokeys” → list of fields to look for IPs
The Geo module augments certain values with geographic information.
Currently the Geo module applies to every value that is a valid IP (IPv4 or IPv6). In addition to the geographic information, it also augments the message with ASN data, when available.
Example:
{"a": 1, "b": {"foo": "195.22.26.217", "bar": "foo", "_geo_b_foo": {"ip":"195.22.26.217", "path":"b.foo", "age":411268, "country_code":"PT", "country_name":"Portugal", "region":14, "city":"Lisbon", "latitude":38.716705322265625, "longitude":-9.13330078125, "asn":8426, "asn_name":"ClaraNET LTD"}}}
- The “ip” is the IP grabbed from the field path “b.foo”.
- The “path” is the path where the ip was grabbed.
- The "country_code" is the ISO 3166 country code
- The "country_name" is the human readable name
- The "region" is the optional region from the country.
- The "city" is the name of the city
- The "latitude" and "longitude" are the approximate coordinates of the IP.
- The "asn" and "asn_name" are the ASN and human readable name of the owner of the IP.
4.7.8 Module IPRep
- Module name: “iprep”
- Module response: “_iprep_*”
- Additional Features: none
The IPRep module augments certain values with AnubisNetworks’ MailSpike IP reputation data.
Currently the IPRep module applies to every value that is a valid IPv4.
For performance reasons, the IPRep data returned by the stream is, by default, not the live data provided by Bitsight but a snapshot version, so there may be discrepancies and slightly outdated values.
Example:
{"a": 1, "b": {"foo": "127.0.0.2", "bar": "foo", "_iprep_b_foo": {"ip":"127.0.0.2", "path":"b.foo", "age":42, "zbi":true, "rep":"h5"}}}
- The “ip” is the IP grabbed from the field path “b.foo”.
- The “path” is the path where the IP was grabbed.
- The optional “age” represents the number of seconds since the last update of the given data
- The “zbi”, when present and with the value true, means the IP is on MailSpike’s ZBI DB. If false, it will be omitted.
- The “rep”, when present, will contain the reputation level according to MailSpike’s reputation DB (L5..L1, H0..H5)
4.7.9 Module CC
- Module name: “cc”
- Module response: “_cc_*”
- Additional Features: none
The CC module augments certain values with Bitsight’s IP knowledge engine data, originally about Command-and-Control (hence the “cc” prefix), but currently with other types of malice tagging.
Currently the CC module applies to every value that is a valid IPv4.
Example: {"a": 1, "b": {"foo": "127.0.0.2", "bar": "foo", "_cc_b_foo": {"ip":"127.0.0.2", "path":"b.foo", "type":"zeus"}}}
- The “ip” is the IP grabbed from the field path “b.foo”.
- The “path” is the path where the ip was grabbed.
- The optional “age” represents the number of seconds since the last update of the given data
- The “type” will contain the type of threat identified by Bitsight
4.8 Multiple Filters
Whenever possible, multiple queries shall be merged into a single query instead of opening a separate connection for each query.
For example, to filter by multiple values (OR), a simple “_geo*.country_code=pt,gb,de” shall be used instead of three connections.
When a query can’t be expressed as a single filter, because the source messages are in different formats (different types) or the filter queries must be applied separately for each source, multiple queries can be joined together over a single connection.
To construct a multiple-filter query, first ensure that each individual filter works on its own, then combine them: keep the first filter with the original query parameters (filter #0), and for each parameter of the subsequent filters add the number of the filter (starting from 1) to the parameter name. E.g.:
fields=_origin&modules=add&a.foo=0&fields1=_geo*.ip&modules1=geo,add&a1.foo=1
Note that regular parameters get suffixed with the filter number (modules1, fields1, kpi1, kpitime1, etc.), whilst field-based parameters have the filter number right after the prefix (f1.path, a1.path, etc.)
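The numbering rules above can be applied programmatically. The sketch below builds the example query from the text, under the assumption that only the "f." and "a." field prefixes need the filter number after the prefix (any other parameter gets a plain numeric suffix):

```python
from urllib.parse import urlencode

def build_multi_filter_query(filters):
    """Combine several independent filter queries into one query string.

    Each filter is a dict of parameters. Filter #0 keeps the original
    names; for filter N >= 1, regular parameters get the suffix N
    ("modules" -> "modules1"), while field-based parameters get the
    number right after the prefix ("a.foo" -> "a1.foo").
    """
    field_prefixes = ("f.", "a.")
    params = []
    for n, flt in enumerate(filters):
        for name, value in flt.items():
            if n > 0:
                prefix = next((p for p in field_prefixes
                               if name.startswith(p)), None)
                if prefix is not None:
                    name = "%s%d.%s" % (prefix[0], n, name[len(prefix):])
                else:
                    name = "%s%d" % (name, n)
            params.append((name, value))
    # Keep the characters the API uses literally (*, comma, dot, underscore).
    return urlencode(params, safe="*,._")

query = build_multi_filter_query([
    {"fields": "_origin", "modules": "add", "a.foo": "0"},
    {"fields": "_geo*.ip", "modules": "geo,add", "a.foo": "1"},
])
print(query)
```

Running this reproduces the example query string shown above.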
4.9 Query Updates
The Stream Platform allows changing the query while the stream connection is still connected, by using a separate channel to re-set the filters.
When a stream connection is opened, a random cookie is returned (“Set-Cookie: s=<hash>”), which is needed to correlate with the separate channel.
Using this same cookie, a request can be made to the endpoint “/setstream”, with the exact same parameter format as “/stream”, and this request will change the filters applied to the parallel stream connection.
The /setstream will return 200 OK right away, assuming the authentication and parameters are valid.
The connection order can be either one (the /stream or the /setstream first), as the cookie session is temporarily persisted: it currently stays in memory, so it gets revoked if the server restarts, or after a long while if too many sessions accumulate, with the older sessions (the ones not seen recently) being kicked out.
For example, let’s assume a connection to /stream?key=xxx&fields=foo enables a flow of messages that contain the key “foo” at the root of the message. Then a separate call to /setstream?key=xxx&fields=foo,bar, using the same cookie as the one received on the /stream, will re-set the stream to flow all messages that contain either the key “foo” or the key “bar”.
With this feature, multiple /stream connections can receive the same data, if all connections use the same cookie. For example, having two /stream?key=xxx&f.__=__ connections (the __ sets the stream to return nothing, as there is no key “__”, much less one with that value), both with the same cookie, and then re-setting the fields with a /setstream call and the same cookie, will turn both stream connections to the right filter.
As the cookie is tied to the key, and there is no possibility of cookie collisions between multiple users, a workaround to avoid processing the HTTP response and the cookie header is to set the cookie to any value the Subscriber wants, taking care that values do not collide when multiple subscribers owned by the same key are connected. The Stream Platform accepts any ASCII string as the cookie value.
The cookie can also be passed as a query parameter “s=xxx”. This is required for WebSockets, as the WebSocket spec does not allow manipulating the HTTP headers, and some browsers may not guarantee that a cookie received on the first WebSocket request, or on a regular HTTP request, will be sent back correctly. In a browser with a WebSocket connection it’s safer to generate a unique hash and pass it directly over the query string. The query-string parameter overrides any value in the cookie header.
Information on Future Version: A future version may stop returning cookies and rely on the client to pass its own session value only when the session feature (e.g. setstream) will really be used.
5 Credentials
The Stream API can be configured to allow authentication via credentials, exchanging those credentials for a temporary key that can then be used for any other request, until that key is revoked.
To perform this exchange, the "/auth" endpoint shall be called, using POST, and passing the following parameters:
- "key" → the "application key"
- "user" → the username
- "hash" → the hash of the password
The "hash" is calculated by MD5(key+"::"+user+"::"+pass) in lower case.
The response will either be a 403 if any parameter is incorrect, or a 200 OK with a JSON response containing the token (temporary key):
{"token":"<temporary key>"}
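The hash calculation above can be sketched with the standard library. The endpoint and parameter names are as documented; the key/user/password values below are placeholders:

```python
import hashlib

def auth_params(key, user, password):
    """Build the POST parameters for the /auth endpoint.

    The hash is md5(key + "::" + user + "::" + password) in lower-case
    hex, as specified above.
    """
    raw = "%s::%s::%s" % (key, user, password)
    return {
        "key": key,
        "user": user,
        # hexdigest() already returns lower-case hex.
        "hash": hashlib.md5(raw.encode()).hexdigest(),
    }

params = auth_params("appkey", "alice", "s3cret")
print(params["hash"])
```

These parameters would then be sent via POST to "/auth"; a 200 OK response carries the temporary key in the "token" field.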
6 Asynchronous API calls
Sometimes the information provided by the modules described above is not required for every single message, but only for a small subset of content, especially when the content provided by the modules consumes too many resources. An example is retrieving the reverse name for each IP on the stream vs. retrieving the reverse names only for a small subset of IPs seen on the stream, e.g. the top IPs. For that there is an API that allows asynchronously requesting information, which is then returned through the same stream channel.
Each module above may be available as an API, but there are also API modules that can’t be applied to all messages, or that, due to their meaning, wouldn’t even make sense there.
To access the API, a flow similar to the /setstream above shall be used, with the difference that the /stream must be connected before the call to the API happens. This means the cookie used on the /stream is to be used on the call to /api, to correlate the request with the path used to return the message.
An API response over the stream can be identified by the key “_service”, which contains the same value passed to /api?service=xxx; the relevant values given on the /api call will also be replayed back in the stream response.
For example, taking the same example above of retrieving the PTR of a list of IPs:
- Start a /stream connection with the relevant filters as usual, taking note of the cookie used on that connection. For testing, a trick of using the “f.__=__” to return no other message can be used.
- Then in parallel perform a call to:
/api?key=xxx&service=rev&ip=<ip1>&ip=<ip2>
- Over the stream a message will be received, similar to:
{"_origin":"api", "service":"rev", "data":[ {"ip":"<ip1>", "rev":"<rev1>"}, {"ip":"<ip2>", "rev":"<rev2>"} ]}
- Each parameter (in this case, the two “ip” values) will appear as an entry in the list inside "data"
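Separating API responses from regular event messages on the consumer side can be sketched as follows. The field names ("_origin":"api", "service", "data") follow the example above; the sample message is illustrative:

```python
def extract_api_results(stream_message, service):
    """Pick the results of an /api call out of a stream message.

    API responses arrive on the same stream channel, marked with
    "_origin":"api" and the requested service echoed back; regular
    event messages yield None.
    """
    if stream_message.get("_origin") != "api":
        return None  # a regular event message, not an API response
    if stream_message.get("service") != service:
        return None  # a response for some other API call
    return stream_message.get("data", [])

api_msg = {"_origin": "api", "service": "rev",
           "data": [{"ip": "192.0.2.1", "rev": "example.net"}]}
print(extract_api_results(api_msg, "rev"))
print(extract_api_results({"type": "zeus"}, "rev"))  # → None
```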
6.1 API Ping
- Service Name: ping
- Parameters: none
- Returns: 200 OK if key valid, 403 if key invalid
The Ping API service can be used to quickly verify the connectivity, as well as validate if the "key" is still valid.
"/api?key=<key>&svc=ping"
The Stream Server will return 200 OK if the key is valid, or 403 in case the key is no longer valid.
This service shall be used with the temporary keys obtained from credentials, to validate whether the temporary key is still valid or the credentials need to be requested from the user again and a new key exchanged.
6.2 Modules API
6.2.1 API Geo
- Service Name: geo
- Parameters: ip (one or more)
6.2.2 API IPRep
- Service Name: iprep
- Parameters: ip (one or more)
6.2.3 API CC
- Service Name: cc
- Parameters: ip (one or more)
6.2.4 API URI
- Service Name: uri
- Parameters: uri (one or more URIs or hostnames)
7 References and tables
7.1 References
Document | Topic | Date/Version
---|---|---
JSON Serialization | - | -
HTTP1.1 Specification/Chunked | - | V1.1
Table 7-1 References
7.2 Document Revision
# | Primary Author(s) | Reviewers/Approvals | Date | Nature of change |
---|---|---|---|---|
0.9 | Technical PM | PM | 2013.10.21 | Final Technical Draft |
1.0 | Marketing/Documentation services | PM | 2013.10.21 | Text revised |
1.1 | Technical PM | PM | 2014.06.01 | Updated features. |