Google VPC Flow Logs are sampled, and are only generated for TCP, UDP, ICMP, ESP, and GRE traffic, but that's a pretty good set of flows to work with. You can still do plenty of forensic analysis with what is provided: flow object inventories, resource usage accounting, baseline development, and assurance checks for access control, just to name a few.

When Google VPC Flow Logs are generated, they can be forwarded to a number of log sinks as JSON data, which is perfect for our raconvert.1 strategy. Logs can be routed to Cloud Storage, Pub/Sub topics, BigQuery, and Cloud Logging buckets, and Argus has a data architecture that fits most of these. The simplest is Cloud Storage, which a number of sites already use to store their Argus data, either as binary or as JSON. Argus scripts can periodically read Cloud Storage buckets, convert the logs to Argus records, and then process the data: matching against third-party intelligence data, comparing against baselines developed for anomaly detection, doing policy conformance and verification for access control, or feeding machine learning (ML), just as examples.

Pub/Sub topics are also very well suited to our raconvert.1 strategy, since raconvert.1 can run as a daemon reading realtime data, converting it to Argus binary records, and then processing the data for realtime awareness. Setting up a raconvert.1 instance as a Pub/Sub subscriber in Google would be an interesting project.
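To make the JSON-to-raconvert.1 idea concrete, here is a minimal sketch of the flattening step: turning one VPC Flow Log record into a comma-separated ASCII line that an ASCII-to-Argus converter could ingest. The field names follow Google's published VPC Flow Logs `jsonPayload` schema, but the sample record itself is synthetic, and the output column order is an arbitrary choice for illustration, not an actual raconvert.1 input format.

```python
import json

# A synthetic VPC Flow Log record (the jsonPayload portion). Field names
# follow Google's published VPC Flow Logs schema; all values are made up.
SAMPLE = '''{
  "connection": {"src_ip": "10.128.0.2", "src_port": 44321,
                 "dest_ip": "10.128.0.3", "dest_port": 443, "protocol": 6},
  "start_time": "2023-01-01T00:00:00.000000000Z",
  "end_time": "2023-01-01T00:00:05.000000000Z",
  "bytes_sent": "12345",
  "packets_sent": "42"
}'''

def flow_to_csv(record: str) -> str:
    """Flatten one VPC Flow Log JSON record into a single CSV line."""
    r = json.loads(record)
    c = r["connection"]
    fields = [r["start_time"], r["end_time"],
              c["src_ip"], str(c["src_port"]),
              c["dest_ip"], str(c["dest_port"]),
              str(c["protocol"]), r["bytes_sent"], r["packets_sent"]]
    return ",".join(fields)

print(flow_to_csv(SAMPLE))
```

A daemonized version of this would loop over messages pulled from a Pub/Sub subscription (or over objects fetched from a Cloud Storage bucket) and pipe the CSV lines into the converter.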
Now that we're converting Zeek conn.log files using a conversion configuration file strategy, we can extend that approach to convert any ASCII-based flow log to Argus binary.

The next project is to convert Google / AWS VPC flow logs to Argus binary ... if you have any sample flow logs you can share, I can take a shot at generating a raconvert.google.conf file or a raconvert.aws.conf file ...
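As a rough illustration of what any such conversion configuration has to capture, here is a sketch that pairs a Zeek conn.log-style `#fields` header with a data line to recover named columns. The two sample lines are synthetic and abbreviated (a real conn.log carries many more columns and several other `#`-prefixed header lines); this is not the raconvert.1 configuration syntax, just the column-mapping idea behind it.

```python
# Zeek conn.log files declare their columns in a tab-separated "#fields"
# header line. The heart of an ASCII-flow-to-Argus conversion is mapping
# those named source columns onto flow attributes. Sample lines are
# synthetic and truncated for illustration.
HEADER = "#fields\tts\tid.orig_h\tid.orig_p\tid.resp_h\tid.resp_p\tproto"
DATA = "1672531200.000000\t10.0.0.1\t51515\t10.0.0.2\t80\ttcp"

def parse_conn_line(header: str, line: str) -> dict:
    """Zip a conn.log data line against its #fields header."""
    names = header.split("\t")[1:]  # drop the "#fields" token itself
    return dict(zip(names, line.split("\t")))

rec = parse_conn_line(HEADER, DATA)
print(rec["id.orig_h"], rec["id.resp_p"], rec["proto"])
```

The same zip-against-a-declared-schema pattern applies to AWS VPC Flow Logs, whose version-2 format puts the field names in a fixed, space-separated order instead of a per-file header.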