
Business in the 5th Domain – Logging

Dr Craig Valli and Dr Ian Martinus

Logging – the good kind

Most compute devices have, among their core abilities, logging (recording of activity) of some type. This is where the actions or errors of the component being used are recorded. These “logs” are sometimes the only viable data point that may have evidentiary value in a forensic examination of a compromised system. At another level, they are an excellent source of business intelligence for an organisation with respect to the cyber security posture of its IT systems. They record the Who, When and What happened for a system or system of systems.

One of the first things cyber criminals attempt to do is subvert a system’s ability to keep logs and record their activity. An event of this type is an almost unequivocal indicator that a system has been compromised by criminals. Most cyber security laws prohibit the unauthorised use and modification of data or systems, but if your logs do not record it…well.

Logging breaks into two broad areas: operational and behavioural.

Operational logging

This monitors, for instance, device performance, load and the use of hardware resources such as CPU, memory and disk storage, and is used to optimise the performance of those resources. Examples of use are listed below, followed by a short sketch of what such logging might look like:

  1. generating usage information, for instance to see how much use a new system is getting

  2. how much network bandwidth is being consumed

  3. how full memory and data storage are
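
As an illustration only, here is a minimal Python sketch of operational logging. It assumes the third-party psutil package is installed and that writing one line per minute to a local file called ops.log is acceptable; it is a sketch, not a recommended monitoring agent.

import logging
import time

import psutil  # third-party package (assumed installed: pip install psutil)

# Record basic operational metrics to ops.log, one timestamped line per sample.
logging.basicConfig(
    filename="ops.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

while True:
    cpu = psutil.cpu_percent(interval=1)       # CPU utilisation over the last second (%)
    mem = psutil.virtual_memory().percent      # memory in use (%)
    disk = psutil.disk_usage("/").percent      # how full the root filesystem is (%)
    logging.info("cpu=%s%% mem=%s%% disk=%s%%", cpu, mem, disk)
    time.sleep(60)                             # sample roughly once a minute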

Behavioural logs

These logs record what a user or a system does, and how, when interacting with the monitored feature of a system, and they are typically used to inform and enforce organisational policy. Examples of use are listed below, followed by a short sketch:

  1. Monitoring employee use of the Internet

  2. Monitoring employee use of email

  3. Logging database interactions
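
By way of illustration, a minimal Python sketch of one behavioural summary is shown below. It assumes a Squid-style proxy access log named access.log in which the client address is the third whitespace-separated field (a sample of that format appears later in this article); the filename and the “top ten” cut-off are illustrative choices.

from collections import Counter

requests_per_client = Counter()

# Tally proxy requests per client address from a Squid-style access.log,
# where the client IP is the third whitespace-separated field.
with open("access.log") as logfile:
    for line in logfile:
        fields = line.split()
        if len(fields) >= 3:
            requests_per_client[fields[2]] += 1

# Report the ten busiest clients.
for client, count in requests_per_client.most_common(10):
    print(client, count)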

What is in a log?

Typically a log is a field-delimited line of text, with a timestamp of varying precision acting as the index for the file. One of the problems is that logfiles are rarely uniform or sans “enhancements” to an already existing standard, be that published or de facto. Correspondingly, there are tools of varying ability, sometimes producing outputs of no practical value in securing your systems or recovering from an actual incident. However, logging and the resulting logfiles for most critical applications run on servers are useful for cyber security purposes. They also have utility in making your system more resilient through a better cyber security posture and increased situational awareness.

Log files typically record an “event” on your system, and these fall into two major categories: normal and abnormal. An event is when, for instance, a client requests a web page from your webserver and it serves the page, a user sends or retrieves mail or creates a transaction in a sales system, or a device performs a task and records the outcome, successful or otherwise, of the process. Logfile entries normally break into four groupings, which are:

  1. date/time – in varying formats

  2. origin/requestor – normally in the form of an IP or username

  3. destination/target – normally in the form of an IP/service name e.g. apache, postfix, squid, smtp

  4. outcome – what actually transpired

Some examples

Squid Internet Cache

1647393521.293 185700 192.168.12.51 TCP_TUNNEL/200 5587 CONNECT classify-client.services.mozilla.com:443 - HIER_DIRECT/34.98.75.36 -
1647393531.293 180670 192.168.12.51 TCP_TUNNEL/200 4839 CONNECT content-signature-2.cdn.mozilla.net:443 - HIER_DIRECT/13.32.125.59

Postfix (Mail)

Mar 16 09:30:06 sohu-beta-001 postfix/pickup[157853]: 0BC59CD5F9: uid=0 from=
Mar 16 09:30:06 sohu-beta-001 postfix/cleanup[158746]: 0BC59CD5F9: message-id=<20220316013006.0BC59CD5F9@sohu-beta-001>
Mar 16 09:30:06 sohu-beta-001 postfix/qmgr[5162]: 0BC59CD5F9: from=root@box.safensecurecyber.com, size=39156, nrcpt=1 (queue active)
Mar 16 09:30:06 sohu-beta-001 postfix/local[158748]: 0BC59CD5F9: to=root@box.safensecurecyber.com, orig_to=, relay=local, delay=0.09, delays=0.06/0.02/0/0.01, dsn=2.0.0, status=sent (delivered to mailbox)
Mar 16 09:30:06 sohu-beta-001 postfix/qmgr[5162]: 0BC59CD5F9: removed

Web Server
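
To make the four groupings concrete, the minimal Python sketch below splits the first Squid line above into date/time, origin, destination and outcome. The field positions assume Squid’s native access log format; other log types will need their own field mapping.

import datetime

# One Squid access.log line (native format), taken from the example above.
line = ("1647393521.293 185700 192.168.12.51 TCP_TUNNEL/200 5587 "
        "CONNECT classify-client.services.mozilla.com:443 - HIER_DIRECT/34.98.75.36 -")

fields = line.split()
event = {
    "date/time": datetime.datetime.fromtimestamp(float(fields[0])),  # Unix epoch seconds with milliseconds
    "origin": fields[2],                                             # requesting client IP
    "destination": fields[6],                                        # requested host:port
    "outcome": fields[3],                                            # Squid result code / HTTP status
}
print(event)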

Processing logfiles – a note

For every type of logfile there are probably at least 5-20 different tools that can process the logs for you into textual reports, graphical reports or combinations of both. We will not cover them in this article but in separate articles around securing critical technology types.

So what can logfiles once processed tell us?

In short, basically a lot, some of it to a forensic level of integrity and suitable for admission as evidence in criminal proceedings. Logfiles, once processed in meaningful ways, allow us to extract intelligence and establish “norms”, i.e. normal patterns of behaviour for your systems. Once you know what is normal, you correspondingly know what abnormal looks like. Abnormal does not always mean “bad”, but it is normally a strong indicator that something could be going that way shortly.

Some examples

  1. Your email server starts to send a large volume of emails at 1am for 10 minutes and then stops, every day.

  2. Your web server gets 1,000 requests in less than 15 seconds, when you normally get about 1,000 per hour.

  3. There are 10,000 admin/root login attempts on your router/server/application (why admin/root? That account normally does not get “disabled” for failed attempts, and it also has the most power).

  4. All of the core temperatures and utilisation figures for your CPU and GPUs are now constantly at 80% or higher.

All of the above examples would indicate a problem with your systems, pointing to attempted or actual compromise. Whatever the cause, they warrant investigation, and the first place to look is the logfile detail that produced these observable outcomes.
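
As one illustration of turning a “norm” into a check, the minimal Python sketch below counts requests per minute in a Squid-style access.log and flags any minute far above the average. The 10x threshold and the filename are illustrative assumptions, not a tuned detection rule.

from collections import Counter

# Count requests per minute in a Squid-style access.log (Unix timestamp in
# the first field) and flag any minute that is far above the average rate.
per_minute = Counter()
with open("access.log") as logfile:
    for line in logfile:
        fields = line.split()
        if fields:
            minute = int(float(fields[0]) // 60)   # bucket requests by minute
            per_minute[minute] += 1

if per_minute:
    baseline = sum(per_minute.values()) / len(per_minute)   # average requests per minute
    for minute, count in sorted(per_minute.items()):
        if count > 10 * baseline:                            # crude illustrative threshold: 10x the average
            print("abnormal burst:", count, "requests in the minute starting at epoch", minute * 60)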

Storing logfiles

Let us kill the old hoary chestnut that logging slows down the system and rapidly consumes hard drive space. Logfiles are typically text (ASCII) files and compress by at least 90%, so a 10GB file is around 1GB once compressed. On modern servers or desktops the performance cost is probably not noticeable unless you have the logging level (the amount and type of data recorded) set to the most verbose and are using arcane machines. Even at that level, we posit that with multi-core systems and terabyte-scale storage it is not really an issue anymore. Our advice is below, followed by a short sketch of compressing and encrypting a logfile before backup:

  1. We advise you retain your logfiles for at least 2-3 years…yes 2-3 years.

  2. You should also back them up in your normal backup process

  3. Backups should be stored offsite as well

  4. You should store them compressed and encrypted as well and do this before sending to remote backup spaces
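
A minimal compress-then-encrypt sketch is shown below, using Python’s built-in gzip module and the third-party cryptography package. The filenames are illustrative and, in practice, the encryption key must be generated once and stored securely, never alongside the backup itself.

import gzip

from cryptography.fernet import Fernet  # third-party package (pip install cryptography)

# Compress the logfile first; plain-text logs often shrink by 90% or more.
with open("access.log", "rb") as raw, gzip.open("access.log.gz", "wb") as packed:
    packed.write(raw.read())

# Then encrypt the compressed archive before it leaves your network.
# The key below is generated on the spot for illustration only; in practice,
# generate it once and keep it somewhere safe, separate from the backup.
key = Fernet.generate_key()
cipher = Fernet(key)
with open("access.log.gz", "rb") as packed:
    encrypted = cipher.encrypt(packed.read())
with open("access.log.gz.enc", "wb") as sealed:
    sealed.write(encrypted)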

