Wednesday, October 29, 2014

Getting current TCP connection count on a Linux Server with tshark

Do you have a lot of connections because of a DoS attack? Or perhaps your MySQL server suffers from connection storms? Do you need to know the exact number of those TCP connections?

OK... here we go!

Install Wireshark's terminal tool (tshark) on your Linux server and then run:

tshark -f 'tcp port 80 and tcp[tcpflags] & (tcp-syn) !=0 and tcp[tcpflags] & (tcp-ack) = 0' -n -q -z io,stat,1 -i eth0 -a "duration:10"

  • "port 80" can be changed to "port 3306" or whatever port you need.
  • "eth0" and "duration:10" can be changed too.

tshark captures traffic for 10 seconds. After that, it writes a report with your connection count for each one-second interval (the Frames column). Because the capture filter matches only packets with SYN set and ACK clear, each frame counted is a new connection attempt.

| IO Statistics             |
|                           |
| Interval size: 1 secs     |
| Col 1: Frames and bytes   |
|          |1               |
| Interval | Frames | Bytes |
|  0 <>  1 |     10 |   740 |
|  1 <>  2 |    105 |  7770 |
|  2 <>  3 |      1 |    74 |
|  3 <>  4 |      0 |     0 |
|  4 <>  5 |      3 |   222 |
|  5 <>  6 |     85 |  6290 |
|  6 <>  7 |     16 |  1184 |
|  7 <>  8 |     31 |  2294 |
|  8 <>  9 |     72 |  5328 |
|  9 <> 10 |      3 |   222 |
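To get the total over the whole capture window, you can sum the Frames column. A small Python helper (a sketch, assuming the report format shown above) would be:

```python
import re

def sum_frames(report: str) -> int:
    """Sum the Frames column of a tshark io,stat report.

    Assumes interval rows look like '|  0 <>  1 |     10 |   740 |',
    as in the sample report above.
    """
    total = 0
    for line in report.splitlines():
        # Match '| <start> <> <end> | <frames> |' and capture <frames>.
        m = re.match(r'\|\s*\d+\s*<>\s*\d+\s*\|\s*(\d+)\s*\|', line)
        if m:
            total += int(m.group(1))
    return total

sample = """\
|  0 <>  1 |     10 |   740 |
|  1 <>  2 |    105 |  7770 |
|  2 <>  3 |      1 |    74 |
"""
print(sum_frames(sample))  # 116 new connection attempts
```

Pipe tshark's output into a file and pass its contents to this function, or adapt it to read from stdin.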

That's all.

Thursday, October 16, 2014

Fast stats from glassfish server.log

A very quick Python script for a fast look at your GlassFish server.log file. Sample output:


Start date:  [2014-10-13T23:54:54.372+0200]
End date:  [2014-10-16T13:46:22.230+0200]
Total INFO:  826
Total WARN:  126
Total SEVERE:  2341
Total ERROR:  96
Total Processing:  3389
Total Exceptions:  0
Total logfile lines:  13646

The script:
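A minimal sketch in this spirit (the ODL-style log line format, e.g. `[2014-10-13T23:54:54.372+0200] [glassfish 4.0] [INFO] ...`, is an assumption, as are the field names):

```python
import re
import sys

# Timestamps are assumed to open each ODL-format log record.
TS_RE = re.compile(r'^\[(\d{4}-\d{2}-\d{2}T[^\]]+)\]')

def summarize(lines):
    """Count log levels, exceptions and lines; return first/last timestamps."""
    stats = {'INFO': 0, 'WARN': 0, 'SEVERE': 0, 'ERROR': 0,
             'Exceptions': 0, 'lines': 0}
    first = last = None
    for line in lines:
        stats['lines'] += 1
        m = TS_RE.match(line)
        if m:
            last = m.group(1)
            if first is None:
                first = last
        for level in ('INFO', 'WARN', 'SEVERE', 'ERROR'):
            if '[%s]' % level in line:
                stats[level] += 1
        if 'Exception' in line:
            stats['Exceptions'] += 1
    return first, last, stats

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        first, last, stats = summarize(f)
    print('Start date: ', '[%s]' % first)
    print('End date: ', '[%s]' % last)
    for level in ('INFO', 'WARN', 'SEVERE', 'ERROR'):
        print('Total %s: ' % level, stats[level])
    print('Total Exceptions: ', stats['Exceptions'])
    print('Total logfile lines: ', stats['lines'])
```

Run it as `python logstats.py /path/to/server.log` (script name made up).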

Friday, October 3, 2014

Avoiding the "NGINX buffers the request of the body when uploading large files" issue

The problem with NGINX is well explained by David Moreau Simard in his blog post, "A use case of Tengine, a drop-in replacement and fork of nginx". The summary is in these paragraphs:
I noticed a problem when using nginx as a load balancer in front of servers that are the target of large and numerous uploads. nginx buffers the request of the body and this is something that drives a lot of discussion in the nginx mailing lists.
This effectively means that the file is uploaded twice. You upload a file to nginx that acts as a reverse proxy/load balancer and nginx waits until the file is finished uploading before sending the file to one of the available backends. The buffer will happen either in memory or to an actual file, depending on configuration.
Tengine was recently brought up in the Ceph mailing lists as part of the solution to tackling the problem so I decided to give it a try and see what kind of impact its unbuffered requests had on performance.
Similar issues have been reported on many mailing lists.

I have made a quick adaptation of Yaoweibin's no_buffer patch for the new nginx releases.

Weibin Yao (yaoweibin) is a developer working on the Tengine project:

Tengine is a web server originated by Taobao, the largest e-commerce website in Asia. It is based on the Nginx HTTP server and has many advanced features. Tengine has proven to be very stable and efficient on some of the top 100 websites in the world.

At the moment, it is not possible to avoid the buffering of POST request bodies in stock NGINX. If you upload large files to a backend, you know what I mean.

Tengine has a patch (by yaoweibin?) that solves this, and it is listed as a feature on its web page:
  • Sends unbuffered upload directly to HTTP and FastCGI backend servers, which saves disk I/Os.
There is a pending ticket with the nginx team requesting this feature, but there is no ETA.

Finally, I chose to adapt Yaoweibin's patches to the nginx 1.7.6 release.

For me, it is working perfectly.

A CentOS RPM package is available in our repo.

The new options in the conf file are:
  • client_body_buffers
  • client_body_postpone_size
  • proxy_request_buffering
  • fastcgi_request_buffering
The description of these new options is on the Tengine documentation page.
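With the patched build (or Tengine), an unbuffered upload location could look like the following sketch; the upstream address and location name are made up:

```nginx
upstream backend {
    server 10.0.0.10:8080;
}

server {
    listen 80;

    location /upload {
        client_max_body_size 2g;       # allow large uploads
        proxy_request_buffering off;   # stream the request body to the backend
        proxy_pass http://backend;
    }
}
```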

Note: this patch is no longer necessary as of nginx 1.7.11, which added the proxy_request_buffering directive natively.