organizing large datasets that reference lookup info from outside sources

mike tancsa mike at sentex.ca
Thu Oct 19 15:42:20 EDT 2017


Hi folks,
	I have a lot of network flows that I need to answer questions about
that aren't easily handled with a simple argus command, and I'm looking
for suggestions on how best to organize the data.


I have two large classes of endpoints -- servers and support staff.  The
servers are further broken down by department.

To complicate matters, the servers are all on dynamic IPs; some come and
go a few times a day and get different IP addresses each time, as do the
support staff.

I get general questions like

Did anyone access servers x, y, and z in the last 3 days via netops, and when?

or

Who accessed all of the Accounting Department's servers in Toronto
yesterday?

I can easily tell by port (6502) whether there was remote access to a
server.  The hard part is matching "server" to sets of IPs by time, as
well as matching the user's IP address to the user's x509 token name.
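The IP-to-identity half of that matching is essentially a point-in-interval lookup. Here's a minimal sketch, assuming the x509 login records can be reduced to (identity, ip, start, stop) tuples; the names and data below are purely illustrative:

```python
from datetime import datetime

# Hypothetical lease/login records: (identity, ip, start, stop).
# In practice these would come from the x509 token logs.
leases = [
    ("alice-x509", "10.9.8.7", datetime(2017, 9, 1, 8), datetime(2017, 9, 1, 17)),
    ("bob-x509",   "10.9.8.7", datetime(2017, 9, 1, 18), datetime(2017, 9, 2, 2)),
]

def who_was(ip, when):
    """Return the identity that held `ip` at time `when`, or None."""
    for ident, lease_ip, start, stop in leases:
        if lease_ip == ip and start <= when < stop:
            return ident
    return None
```

The same function works for the server side by swapping in the server IP history; the key point is that every lookup is keyed on (IP, time), never IP alone.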

Here are some of the challenges I have:

Did anyone access servers x, y, and z in the last 3 days?
I first need to figure out which IPs those servers were on, and the start
and stop times for each, so I can see that server 'x' was at IP address
192.168.77.3 from Sep 1 3pm to 9pm, then at 10.2.3.4 from Sep 1 10pm to
Sep 12 1pm, and so on.  A server could have a few dozen IPs in the time
period I am interested in.  This comes from a MySQL database of start
and stop records.
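That first step is an interval-overlap query against the start/stop table. A sketch of the shape of that query, using sqlite3 as a stand-in for MySQL (the table and column names are assumptions, and the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE server_ips (
    server TEXT, ip TEXT, start_ts TEXT, stop_ts TEXT)""")
conn.executemany("INSERT INTO server_ips VALUES (?, ?, ?, ?)", [
    ("x", "192.168.77.3", "2017-09-01 15:00", "2017-09-01 21:00"),
    ("x", "10.2.3.4",     "2017-09-01 22:00", "2017-09-12 13:00"),
    ("y", "10.2.3.9",     "2017-09-05 09:00", "2017-09-06 09:00"),
])

# All (server, ip) windows for servers x, y, z that overlap the period
# of interest.  Two records overlap a window iff each starts before the
# other ends.
rows = conn.execute("""
    SELECT server, ip, start_ts, stop_ts FROM server_ips
    WHERE server IN ('x', 'y', 'z')
      AND start_ts < '2017-09-04 00:00'   -- record starts before window ends
      AND stop_ts  > '2017-09-01 00:00'   -- record ends after window starts
    ORDER BY server, start_ts
""").fetchall()
```

Each resulting (ip, start, stop) triple can then be turned into a per-IP, time-bounded flow query, which is where the manual-step explosion comes from today.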


I can do a lot of manual steps to answer these questions, but manual
steps take time and are prone to error.  I haven't had to do this very
often in the past, but I'm seeing it more and more, so I'm looking for
strategies to better prep the data ahead of time to make my life easier
and answer these questions more quickly.

Any suggestions / pointers would be most welcome.  I am thinking I want
to look into using ralabel?

Perhaps as part of a daily script,

* identify the IP(s) of the server for that day and their times.
* query out all their flows, label them with server:department, and
drop them into new argus files based on the server:department
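The first bullet could feed ralabel by regenerating an address-label file each day from the MySQL assignments. A minimal sketch of that generation step -- note the two-column "CIDR label" layout is an assumption on my part, so the exact syntax should be checked against the ralabel.conf man page on your install:

```python
# Hypothetical day's server->IP assignments, as pulled from the MySQL
# start/stop table.
assignments = [
    ("192.168.77.3", "serverx:accounting"),
    ("10.2.3.4",     "serverx:accounting"),
    ("10.2.3.9",     "servery:netops"),
]

def make_label_file(assignments):
    """Render assignments as 'CIDR<TAB>label' lines (assumed format)."""
    lines = ["%s/32\t%s" % (ip, label) for ip, label in assignments]
    return "\n".join(lines) + "\n"

label_text = make_label_file(assignments)
```

A cron job could write this out nightly and then run the flow split, so the time-dependent IP-to-server mapping is baked into the labels once, instead of being rediscovered per question.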


Is there a better way to approach this ?

	---Mike


