# Complex debugging of PHP with Valgrind [FreeBSD 7.3, Logstash]



Have you ever wondered how PHP errors affect performance? Let’s consider a PHP script which produces 2000 errors of type E_NOTICE during 5 requests (500 errors per request).
Every time the script runs, the engine has to handle those errors somehow. So the question is: "How? What can I gain by fixing those "less important" issues?" (Besides the obvious benefit, which is better code 🙂 ).
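Such a script is easy to simulate: reading an undefined array index raises E_NOTICE on every loop iteration. A minimal sketch (the filename test.php and the exact loop are my assumptions, not the original script):

```shell
# Generate a hypothetical test.php that produces 500 E_NOTICE errors per run
cat > test.php <<'EOF'
<?php
$data = array();
for ($i = 0; $i < 500; $i++) {
    $x = $data['missing'];   // E_NOTICE: Undefined index: missing
}
EOF
```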

In this article we’re going to take a deep look into php generated errors and we’ll try to analyze results and draw some conclusions from that.

Installing Valgrind on FreeBSD 7.3

What exactly is Valgrind?
According to its webpage: "Valgrind is an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail."

What do we need it for?
Instead of grabbing logs produced by the PHP error handler, we will step down one level and see what the Zend Engine is doing "under the hood" 😛

Installation on FreeBSD 7.3

# Checkout sources
hg clone https://bitbucket.org/stass/valgrind-freebsd

# Build
sh autogen.sh
./configure    # or: ./configure --prefix=/home/mkaczanowski/vt/
gmake install

If the installation succeeded, try to run a PHP script under Valgrind:

valgrind --tool=callgrind --vgdb-prefix=/home/mkaczanowski/loc_temp/vgdb-pipe --dump-instr=yes --trace-jump=yes -v /usr/local/bin/php-cgi -f test.php

It is very likely that you will get an error:

==79162== error 22 Invalid argument
==79162== mknod /home/mkaczanowski/loc_temp/vgdb-pipe-from-vgdb-to-79162-by-mkaczanowski-on-???
==79162== valgrind: fatal error: vgdb FIFOs cannot be created.

The problem is that FreeBSD 7.3 ships an old syscall.h. "mkfifo" was merged into "mknod" in newer versions of FreeBSD, which is why we have to mess with the code :)
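To see what Valgrind is actually failing at: vgdb communicates with the debugged process through named pipes (FIFOs), created via the mkfifo/mknod system calls. The userland equivalent of the failing operation is simply (demo path is arbitrary):

```shell
# Create and inspect a FIFO like the vgdb-pipe valgrind needs
mkfifo /tmp/vgdb-pipe-demo
ls -l /tmp/vgdb-pipe-demo    # the leading 'p' in the mode marks a FIFO
rm /tmp/vgdb-pipe-demo
```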

# Open file: coregrind/m_syscall.c
nano coregrind/m_syscall.c

# Find the lines starting with
#elif defined(VGP_amd64_freebsd)
UWord val;

# Add below
// Redirect all system calls from "mknod" to "mkfifo"
if(sysno == 14) sysno = 132;

# Compile again
gmake install
# Run again
valgrind --tool=callgrind --vgdb-prefix=/home/mkaczanowski/loc_temp/vgdb-pipe --dump-instr=yes --trace-jump=yes -v /usr/local/bin/php-cgi -f test.php

If the Valgrind run was successful, you will see a file called ‘callgrind.out.<pid>’ (callgrind appends the process id to the filename).

Installing Logstash, Kibana and Elasticsearch

The first step is downloading Logstash.

wget "https://download.elasticsearch.org/logstash/logstash/logstash-1.1.13-flatjar.jar"

Logstash in version 1.1.13 requires Elasticsearch 0.20.6, so download it too. (Elasticsearch is the database engine used by Logstash; for more information please see the documentation.)

wget "https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.6.zip"
unzip elasticsearch-0.20.6.zip

Kibana is the frontend for Elasticsearch. In order to install it, run:

git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git
cd Kibana
gem install bundler
bundle install

If errors occur, I recommend checking your Ruby version (ruby -v) and using rvm.

# Change in KibanaConfig.rb
Elasticsearch = "elasticsearch:9200"

At this point you should have Logstash, Kibana and elasticsearch installed.

UPDATE: Kibana Milestone 3 is a purely web-based frontend, so there is no need to use Ruby at all.

Get php callstack dump

Every time an error occurs, some function has to handle it. We would like to know which function that is, how many times it is called, and what the total time spent on handling errors is. We can do it using Valgrind!
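The callgrind output that answers those questions is a plain-text profile: `fn=` lines name a function and the following lines carry position/cost pairs. As a rough illustration (the file below is hand-made for the example, not real php-cgi output), per-function costs can be summed with a short awk script:

```shell
# A tiny hand-made callgrind-style fragment, plus an awk sketch
# that sums the cost column per function name
cat > callgrind.demo <<'EOF'
fn=zend_error
10 120
11 80
fn=php_error_cb
20 40
EOF
awk '/^fn=/ { fn = substr($0, 4); next }
     NF == 2 { cost[fn] += $2 }
     END { for (f in cost) print f, cost[f] }' callgrind.demo | sort
```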

One could ask: "Why don’t you use XDebug?" XDebug shows only the upper, userland level; it doesn’t show the interesting part, which is what happens inside the Zend Engine. If I am wrong, correct me 🙂


# Run
valgrind --tool=callgrind --vgdb-prefix=/home/mkaczanowski/loc_temp/vgdb-pipe --dump-instr=yes --trace-jump=yes -v /usr/local/bin/php-cgi -f test.php

# Now you should have a file callgrind.out.randNumberHere


Kcachegrind is a program which interprets the output of callgrind.

# Install
sudo apt-get install kcachegrind

# Run
kcachegrind callgrind.out.randNumberHere

Now open the log. You should see something like on the screenshot:

Silent php error logging

It sometimes happens that you have so many errors that it’s impossible to run any part of the application with errors shown on the screen. Developers usually suppress them, hoping to fix them within the next few days (which never happens).

One of the solutions is to create silent php logging function, which handles errors and saves them to the file.

function silentLogErrors($logToCustomFile = false, $filename = null){
    static $sessionKey = null;
    if($sessionKey == null) $sessionKey = microtime();

    $testKey = $sessionKey;
    $logFunc = function ($errno, $errstr, $errfile, $errline) use ($logToCustomFile, $filename, $testKey){
        $errType = "";

        switch($errno){
            case E_NOTICE:
                $errType = 'E_NOTICE'; break;
            case E_STRICT:
                $errType = 'E_STRICT'; break;
            case E_DEPRECATED:
                $errType = 'E_DEPRECATED'; break;
        }

        switch($errno){
            case E_NOTICE:
            case E_DEPRECATED:
            case E_STRICT:
                $errMsg = $errType.": ".$errstr." FILE: ".$errfile." LINE: ".$errline." KEY: ".$testKey;

                if($logToCustomFile == true && !empty($filename)){
                    error_log($errMsg."\n", 3, $filename);
                } else {
                    error_log($errMsg, 0);
                }

                return true;
        }

        return false; // Let the PHP error handler handle all the rest
    };

    set_error_handler($logFunc, E_NOTICE | E_DEPRECATED | E_STRICT);
}

If $logToCustomFile is turned on, the output is written to the given file; otherwise it goes to the default PHP error log (e.g. the Apache log).

If the code is not well implemented, your log should quickly fill up with errors 🙂

# Read them
tail -f "yourLogFileHere.log"

Parse php log via logstash

At this point we have collected some logs using the PHP error handler. Each row in the file should have the following structure:

E_NOTICE: Undefined index: test FILE: /home/mkaczanowski/test.php LINE: 125 KEY: 0.18754800 1375884841
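Before writing the grok pattern it is worth checking that the fields really are extractable from such a line. The first field the pattern captures (the error type) can be pulled out with plain grep:

```shell
# Extract the error-type field from a sample log line (same line as above)
line='E_NOTICE: Undefined index: test FILE: /home/mkaczanowski/test.php LINE: 125 KEY: 0.18754800 1375884841'
echo "$line" | grep -oE '^(E_STRICT|E_NOTICE|E_DEPRECATED)'
```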

What if we want to read thousands of rows in a log? Messy, isn’t it?
Here Logstash comes in with quite a neat feature: the grok parser. With that tool, extracting, searching and sorting logs is a piece of cake :)

First of all we need to run the Elasticsearch engine:

# Go to elasticsearch directory and then
sudo ./bin/elasticsearch -f

In a second terminal run Kibana (or, if you prefer, run Kibana 3 as a web application on an HTTP server) and configure Logstash:

# Go to the logstash directory
cd logstash/

# Create configuration and edit
touch simple.conf && nano simple.conf

input {
    stdin { type => "stdin-type" }

    tcp {
        type => "apache-error"
        port => 3333
    }
}

filter {
    grok {
        type => "apache-error"
        add_tag => "%{php_error_type}"
        # NOTE: only php_error_type is referenced elsewhere; the remaining
        # capture names below are illustrative
        match => [ "@message", "(?<php_error_type>(E\_STRICT|E\_NOTICE|E\_DEPRECATED)+)?: (?<php_error_msg>.*(?=\sFILE))? (?<php_error_file>.*(?=\sLINE).*(?=\sKEY))? (?<php_error_key>.*)" ]
    }
}

output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => ""
  }
}

# Run logstash
sudo java -jar logstash-1.1.13-flatjar.jar agent -f simple.conf

# Import logs
ssh mkaczanowski@localhost cat ~/logs.log | nc localhost 3333

If you need to modify the regular expression, an online grok debugger might help.

At last, see your data in Kibana:


We’ve reached the end of this article, so let’s summarize. We know how to:

  • debug with Valgrind and KCachegrind
  • log php errors silently
  • write expressions for grok
  • use Logstash and Kibana

The setup is quite easy. I would say there are no complicated or very difficult tasks to be done. But! It requires time, so be prepared for it :)