Category Archives: Log Processing

Qmail Statistics

Being a sendmail guy, I was wondering whether qmail had a mail log parser. I came across qmailanalog, but it ships with no useful documentation on how to use it. I pieced together some information from the net, and this is how you do it.

#!/bin/sh
#
PATH=/usr/local/qmailanalog/bin:/var/qmail/bin:$PATH
MAILLOG="/var/log/maillog"
QMAILLOG="/tmp/qmail.$$"
# strip the syslog date/time/host/tag fields so matchup sees bare qmail entries;
# matchup writes pending-message state to descriptor 5, so send that to /dev/null
/usr/bin/awk '{$1="";$2="";$3="";$4="";$5="";print}' < $MAILLOG | matchup 5>/dev/null > $QMAILLOG
(
echo "To: rampras@domain.com"
echo "From: ram@domain.com"
echo "Subject: maillog"
echo ""
zoverall < $QMAILLOG
zfailures < $QMAILLOG
zdeferrals < $QMAILLOG
recipients < $QMAILLOG
senders < $QMAILLOG
) | qmail-inject -f ram@domain.com
rm -f $QMAILLOG

This produces a very nice summary like the following:

Basic statistics

qtime is the time spent by a message in the queue.

ddelay is the latency for a successful delivery to one recipient—the
end of successful delivery, minus the time when the message was queued.

xdelay is the latency for a delivery attempt—the time when the attempt
finished, minus the time when it started. The average concurrency is the
total xdelay for all deliveries divided by the time span; this is a good
measure of how busy the mailer is.

Completed messages: 181
Recipients for completed messages: 234
Total delivery attempts for completed messages: 235
Average delivery attempts per completed message: 1.29834
Bytes in completed messages: 1651675
Bytes weighted by success: 1724065
Average message qtime (s): 2.6932

Total delivery attempts: 242
success: 220
failure: 14
deferral: 8
Total ddelay (s): 495.335359
Average ddelay per success (s): 2.251524
Total xdelay (s): 521.833090
Average xdelay per delivery attempt (s): 2.156335
Time span (days): 1.59635
Average concurrency: 0.00378347
Reasons for failure
………………
………………
………………
One line per reason for delivery failure. Information on each line:
* del is the number of deliveries that ended for this reason.
* xdelay is the total xdelay on those deliveries.
…………..
…………..
…………..
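As a quick sanity check, the average concurrency can be recomputed from the totals in the report: total xdelay divided by the time span in seconds. A one-liner with bc, using the numbers from the report above:

# average concurrency = total xdelay / time span in seconds
# 521.833090 / (1.59635 * 86400) is about 0.00378, matching the report
echo "scale=8; 521.833090 / (1.59635 * 86400)" | bc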

DNS monitoring using MRTG

Thanks to howe81, I am now able to monitor a DNS server using MRTG. The script required for the monitoring is located here.
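I won't reproduce the script here, but any program used as a backtick Target in MRTG follows the same contract: it must print exactly four lines (the first value, the second value, an uptime string, and a target name). A rough stand-in sketch of that contract; the counter file and its layout are my own placeholders, not part of howe81's script:

#!/bin/sh
# Minimal MRTG external-script skeleton. MRTG reads four lines:
#   line 1: "in" value    (here: queries in the last minute)
#   line 2: "out" value   (here: failed queries in the last minute)
#   line 3: uptime string (free text, may be left empty)
#   line 4: target name
COUNTERS=/tmp/dns_counters    # assumed file holding two numbers, one per line
QUERIES=`sed -n 1p $COUNTERS 2>/dev/null`
FAILURES=`sed -n 2p $COUNTERS 2>/dev/null`
echo ${QUERIES:-0}
echo ${FAILURES:-0}
echo "unknown"
echo "mydomain DNS"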

The MRTG config file should look like the following:
Target[mydomain_DNS]: `/etc/mrtg/dnsstats.pl`
Options[mydomain_DNS]: gauge,growright,nopercent,integer,unknaszero
Title[mydomain_DNS]: DNS Server
RouterUptime[mydomain_DNS]: public@localhost
MaxBytes[mydomain_DNS]: 32000
AbsMax[mydomain_DNS]: 64000
WithPeak[mydomain_DNS]: wmy
Colours[mydomain_DNS]: YELLOW #F9C000,RED #F90000,LIGHT YELLOW #FFFFBB,LIGHT RED #FF8080
ShortLegend[mydomain_DNS]:queries/m
YLegend[mydomain_DNS]: Qs per Minute
Legend1[mydomain_DNS]: Queries received over 1 minute
Legend2[mydomain_DNS]: Failed Queries received over 1 minute
Legend3[mydomain_DNS]: Maximal Queries over 5 minutes
Legend4[mydomain_DNS]: Maximal Failed Queries over 5 minutes
LegendI[mydomain_DNS]:  Queries: 
LegendO[mydomain_DNS]:  Failures: 
PageTop[mydomain_DNS]: <H1>DNS Info</H1>
<TABLE>
<TR><TD>System:</TD> <TD>mydomain</TD></TR>
<TR><TD>Maintainer:</TD> <TD>Ram Prasad (ram@mydomain)</TD></TR>
<TR><TD>Description:</TD><TD>DNS Monitor</TD></TR>
<TR><TD>Details:</TD> <TD>Query Monitor</TD></TR>
</TABLE>
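With the target defined, MRTG just needs to run on a schedule; the usual way is a cron entry every five minutes (the config path below is only an example):

# poll the DNS target every five minutes
*/5 * * * * /usr/bin/mrtg /etc/mrtg/mrtg.cfg > /dev/null 2>&1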

AWStats Log Processor

I have been implementing AWStats log processing for various websites. I find it really good: easy to configure, maintain, and use. I will be posting various scripts to run and maintain it, along with config files.

AWStats can be found here:
http://awstats.sourceforge.net

and here are a few features of AWStats:
A full log analysis enables AWStats to show you the following information:
* Number of visits, and number of unique visitors,
* Visits duration and last visits,
* Authenticated users, and last authenticated visits,
* Days of week and rush hours (pages, hits, KB for each hour and day of week),
* Domains/countries of hosts visitors (pages, hits, KB, 266 domains/countries detected),
* Hosts list, last visits and unresolved IP addresses list,
* Most viewed, entry and exit pages,
* Files type,
* Web compression statistics (for mod_gzip),
* Browsers used (pages, hits, KB for each browser, each version, 78 browsers: Web, Wap, Media browsers…),
* OS used (pages, hits, KB for each OS, 31 OS detected),
* Visits of robots (307 robots detected),
* Search engines, keyphrases and keywords used to find your site (The 90 most famous search engines are detected like yahoo, google, altavista, etc…),
* HTTP errors (Page Not Found with last referrer, …),
* Other personalized reports based on url, url parameters, referer field for miscellaneous/marketing purposes.

AWStats also supports the following features:
* Can analyze a lot of log formats: Apache NCSA combined log files (XLF/ELF) or common (CLF), IIS log files (W3C), WebStar native log files and other web, proxy, wap or streaming servers log files (but also ftp or mail log files). See AWStats F.A.Q. for examples.
* Works from command line and from a browser as a CGI (with dynamic filters capabilities for some charts),
* Update of statistics can be made from a web browser and not only from your scheduler,
* Unlimited log file size, support split log files (load balancing system),
* Support ‘nearly sorted’ log files even for entry and exit pages,
* Reverse DNS lookup before or during analysis, support DNS cache files,
* Country detection from IP location (geoip) or domain name.
* WhoIS links,
* A lot of options/filters and plugins can be used,
* Multi-named web sites supported (virtual servers, great for web-hosting providers),
* Cross Site Scripting Attacks protection,
* Several languages. See AWStats F.A.Q. for full list.
* No need for rare Perl libraries; any basic Perl interpreter can run AWStats,
* Graphical and framed reports,
* Look and colors can match your site design,
* Help and tooltips on HTML reported pages,
* Easy to use (Just one configuration file to edit),
* Absolutely free (even for web hosting providers), with sources (GNU General Public License),
* Available on all platforms,
* AWStats has a XML Portable Application Description.
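For day-to-day use, almost everything boils down to two commands; the config name www.myserver.com below is just an example:

# parse new log lines into AWStats' internal data files
perl awstats.pl -config=www.myserver.com -update

# build a static HTML report from the accumulated data
perl awstats.pl -config=www.myserver.com -output -staticlinks > awstats.www.myserver.com.html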

Rsync and Log Processing

To manage and process the logs of multiple web servers, rsync is the most convenient way I have found to transfer the logs from the web servers to a centralized log processing server.

There are two major steps involved:
a) Configuring the Log Processing Server (IP: 192.168.1.1)
b) Configuring the client (say, www.myserver.com) to transfer the logs to the central server.

Configuring the Log Processing Server

Let this server have the IP address 192.168.1.1. We create a directory, /usr/local/logs, where the log files will be downloaded, and a subdirectory for www.myserver.com under it (mkdir /usr/local/logs/www.myserver.com).

a. Create a group logman and add a user logman to it. This will be the uid/gid for the log files.
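For example (the nologin shell and the chown are my own choices, not requirements):

groupadd logman
useradd -g logman -s /sbin/nologin logman
chown -R logman:logman /usr/local/logs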
b. Edit/create /etc/rsyncd.conf with the following details:

uid = logman
gid = logman
use chroot = yes
max connections = 4
log file = /var/log/rsyncd.log
pid file = /etc/rsyncd.pid

[www.myserver.com_logs]
comment = here are the apache access logs from www.myserver.com downloaded
path = /usr/local/logs/www.myserver.com/
hosts allow = www.myserver.com
read only = no

c. Now, run rsync in daemon mode:
# rsync --daemon

We have now successfully configured our server to receive log files.
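Before setting up the client, the export can be verified from an allowed host; asking the daemon for its module list should show the comment configured above:

# list the modules exported by 192.168.1.1
rsync 192.168.1.1::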

Configuring the clients to transfer the logs

On the client system (www.myserver.com), run this command periodically to transfer the logs:
rsync -azvu /usr/local/apache/logs/access_logs 192.168.1.1::www.myserver.com_logs

This way, the logs are transferred to 192.168.1.1 and updated in place (a differential transfer, not a delete-and-recreate) every time.
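One simple way to run it periodically is a cron entry on the client; the hourly schedule below is just an example (the -v flag is dropped so cron stays quiet):

# push the access logs to the central server at the top of every hour
0 * * * * rsync -azu /usr/local/apache/logs/access_logs 192.168.1.1::www.myserver.com_logs > /dev/null 2>&1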