This is a minimal template to get info from your Memcached server from two possible places: via zabbix_agentd on the clients, or via external scripts on the Zabbix server. Choose your option.
Monitored items for now:
'bytes',
'cmd_get',
'cmd_set',
'curr_items',
'curr_connections',
'limit_maxbytes',
'uptime',
'get_hits',
'get_misses',
And the special HIT-ratio in %:
'ratio'
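Under the hood these items come from Memcached's stats command, so you can check the values by hand. A minimal sketch, assuming a local Memcached on the default port and that nc is installed; the awk part also shows how the special ratio item can be derived from get_hits and get_misses:

```shell
# Query a local Memcached and compute the hit ratio (%) from get_hits/get_misses.
# 127.0.0.1:11211 is an assumption (the default port).
printf 'stats\nquit\n' | nc -w 2 127.0.0.1 11211 \
  | awk '$2=="get_hits"{h=$3} $2=="get_misses"{m=$3} END{if (h+m > 0) printf "ratio: %.2f%%\n", h*100/(h+m)}'
```

The same arithmetic (hits * 100 / (hits + misses)) is what the template's ratio item reports.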
Installation in the Zabbix Server
You should look for the external scripts directory in your Zabbix server configuration file. In the CentOS 6.5 RPM Zabbix installation it is: /usr/lib/zabbix/externalscripts
Copy the Python script there. A chmod/chown to grant execute permission may be necessary.
Now, in your Zabbix frontend, go to the Configuration > Templates section and use the Import button on the right side.
Choose the XML file and import.
Apply this new template to your Memcached servers.
You don't need to modify the template if you are using the standard port to access Memcached (11211).
This permits a fast configuration because you can apply the same template to all your Memcached servers without any modification or installation on the agents.
Of course, it can work on the agent/client side too.
Do you have a lot of connections because of a DoS attack? Or perhaps your MySQL server suffers connection storms? Do you need to know the exact number of those TCP connections?
Ok... there we go!
Install Wireshark's terminal version (tshark) on your Linux box and then run:
tshark -f 'tcp port 80 and tcp[tcpflags] & (tcp-syn) !=0 and tcp[tcpflags] & (tcp-ack) = 0' -n -q -z io,stat,1 -i eth0 -a "duration:10"
"port 80" could be "port 3306" or "port whatever-you-want"
"eth0" and "duration:10" can be changed too.
Description:
tshark captures traffic for 10 seconds. After that, it writes a report with your new-connection count for each second (the Frames field).
A very fast Python script to take a quick look at your GlassFish server.log file.
Output:
Start date: [2014-10-13T23:54:54.372+0200]
End date: [2014-10-16T13:46:22.230+0200]
Total INFO: 826
Total WARN: 126
Total SEVERE: 2341
Total ERROR: 96
Total Processing: 3389
Total Exceptions: 0
Total logfile lines: 13646
The script:
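The gist does not embed here; as a rough shell sketch of the same idea (the bracketed GlassFish log format and the level names INFO/WARNING/SEVERE are assumptions about your server.log):

```shell
# Count log levels and first/last timestamps in a GlassFish server.log (sketch)
LOG="server.log"
if [ -f "$LOG" ]; then
    awk '
        NR==1 { first=$1 }             # first bracketed field is the timestamp
        { last=$1 }
        /\[INFO\]/    { info++ }
        /\[WARNING\]/ { warn++ }
        /\[SEVERE\]/  { sev++ }
        END {
            printf "Start: %s End: %s\n", first, last
            printf "Total INFO: %d Total WARN: %d Total SEVERE: %d Total lines: %d\n", info, warn, sev, NR
        }' "$LOG"
fi
```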
I noticed a problem when using nginx as a load balancer in front of servers that are the target of large and numerous uploads. nginx buffers the request body, and this is something that drives a lot of discussion on the nginx mailing lists. This effectively means that the file is uploaded twice. You upload a file to nginx, which acts as a reverse proxy/load balancer, and nginx waits until the file has finished uploading before sending it to one of the available backends. The buffering happens either in memory or in an actual file, depending on configuration. Tengine was recently brought up on the Ceph mailing lists as part of the solution to this problem, so I decided to give it a try and see what kind of impact its unbuffered requests had on performance.
Tengine (https://github.com/alibaba/tengine) is a web server originated by Taobao, the largest e-commerce website in Asia. It is based on the Nginx HTTP server and has many advanced features. Tengine has proven to be very stable and efficient on some of the top 100 websites in the world, including taobao.com and tmall.com
At the moment, it is not possible to avoid buffering of POST requests in NGINX. If you are uploading large files to a backend, you know what I mean.
Tengine has a patch (by yaoweibin?) to solve this, and it appears as a feature on its webpage: http://tengine.taobao.org/
Sends unbuffered upload directly to HTTP and FastCGI backend servers, which saves disk I/Os.
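In Tengine this is exposed as a directive (the same name later landed in stock nginx 1.7.11). A minimal proxy sketch, assuming a backend at 127.0.0.1:8000:

```nginx
location /upload {
    proxy_request_buffering off;   # stream the request body straight to the backend
    proxy_pass http://127.0.0.1:8000;
}
```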
Hello... my new mydumper RPM package for CentOS 6.5 is here. From the original changelog:
Bugs Fixed:
1347392 last row of table not dumped if it brings statement over statement_size
1157113 Compilation of latest branch fails on CentOS 6.3 64bit
1326368 Can't make against Percona-Server-devel-55 headers
1282862 unknown type name 'HASH'
1336860 k is used twice
913307 Can't compile - missing libs crypto and ssl
1364393 rows chunks doesn't increase non innodb jobs
- TokuDB support
- Support to dump tables from different schemas
New Features:
- --lock-all-tables (instead of FTWRL)
If you are looking for a template to write a fast copy-and-paste (daemonized) script for monitoring one process... you can get some ideas from this:
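The embedded script is a gist; as a minimal shell sketch of the core idea (the process name crond is only an example), you would wrap a check like this in a loop and launch it with nohup or setsid to daemonize it:

```shell
#!/bin/sh
# One-shot process check; wrap it in a "while true; do ...; sleep 30; done" loop
# and start it with nohup/setsid to turn it into a simple monitoring daemon.
PROC="${1:-crond}"            # process to watch (example name)
if pgrep -x "$PROC" >/dev/null 2>&1; then
    echo "$PROC is running"
else
    echo "$PROC is NOT running" >&2
    # restart/alert command would go here
fi
```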
Two days ago I visited a friend of mine. I was showing him how to discover open ports in a network with nmap. As an example, I chose his home gateway. His gateway is/was a Linksys WAG54G, with firmware never updated. From 2009, more or less.
He was surprised when we discovered two weird open ports on the gateway: 5190 and 5566. He didn't have any ports configured to be open, but I ignored that because I thought they were private ports used to configure the gateway from a GUI in Windows or something similar.
We kept talking and connecting to some ports for a while... and I figured out that the behavior of port 5566 was different after connecting to port 5190. It was very odd.
If you connect to port 5566 you get disconnected quickly, but if you make a connection to port 5190 first, port 5566 waits for data in a lot of cases. Bye bye, fast disconnection.
So, out of curiosity, I tried to fuzz the port with /dev/urandom... something similar to:
cat /dev/urandom | nc -v 192.168.1.1 5566
Meanwhile... while I was explaining to my friend how a protocol works, our internet connection went down.
BOOM! o_O
I said: - Ermmm... Let me try it again buddy! ..and...
BOOM! O_O
I had to go back home. On the way, I kept thinking about the port issue.
At home, my first step was to see whether those ports were private or public... and BOOM! Public!
But I didn't have any Linksys router at home to try it again... so my friend Shodan came to the rescue! ;)
The next steps were very boring but, more or less, it was a huge trial-and-error process, and this was the PoC conclusion:
Ingredients:
Very old Linksys: Shodan has a lot of them.
Ports 5190 and 5566 open: Shodan has few of them, but it has them.
Nmap to confirm some data.
There we go:
$ nmap -sS -p 5190,5566 x.x.x.x
Starting Nmap 5.51 ( http://nmap.org ) at 2014-09-06 08:33 CEST
Ok. Ingredients OK. We can run the PoC. Sometimes you need to run it twice or more; honestly, I don't know why.
$ python OldLinksysMustDie.py x.x.x.x
OldLinksysMustDie v0.001b PoC
* Connecting to x.x.x.x...
* Cooking... be patient....
* On Fire! >< >< ><
[BYEBYE] Ooops! connection to the target lost. [BYEBYE]
Curl again:
curl -I -q --max-time 5 x.x.x.x
curl: (28) Operation timed out after 5001 milliseconds with 0 bytes received
And RAMBO was here! I tried other ones and... all temporarily dead!!
NOTE: The router recovers 5-10 minutes after running the PoC, but if you want a big DoS, a "loop" in the computing world is so easy..... ;)
Finally, my horrible PoC (horrible like my English) in python:
There are RPM packages for collectd in CentOS 6.x; you will need the EPEL repository.
If you want an easy web interface for the graphs, there is a collectd-web package there, but that package is actually the collection3 frontend. So if you prefer the original collectd-web, you will need to install it from its GitHub site. The most important thing is to create the /etc/collectd/collection.conf file. Be careful: if you want to change the path, you will need to modify the sources.
The collection file content for CentOS is something like:
datadir: "/var/lib/collectd/"
And a minimal configuration for Apache would be something like:
If you get a little headache using torify/torsocks inside a script with nc and ncat, there are some interesting parameters you can use instead, and they are very, very easy.
For example, when you have a Tor SOCKS proxy on localhost, using netcat (nc) it would be similar to:
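A sketch of those parameters, assuming Tor's SOCKS proxy on 127.0.0.1:9050 (the default SocksPort); hostnames and ports are placeholders:

```shell
# OpenBSD netcat: -X selects the proxy protocol (5 = SOCKS5), -x the proxy address
nc -X 5 -x 127.0.0.1:9050 example.com 80

# ncat (from the nmap suite) has equivalent options
ncat --proxy 127.0.0.1:9050 --proxy-type socks4 example.com 80
```

No torify/torsocks wrapper needed: the SOCKS client is built into the tool.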
Multi-threaded if pthreads or Win32 threads are available. Client and server can have multiple simultaneous connections.
UDP
Client can create UDP streams of specified bandwidth.
Measure packet loss
Measure delay jitter
Multicast capable
Multi-threaded if pthreads are available. Client and server can have multiple simultaneous connections. (This doesn't work in Windows.)
Where appropriate, options can be specified with K (kilo-) and M (mega-) suffixes. So 128K instead of 131072 bytes.
Can run for specified time, rather than a set amount of data to transfer.
Picks the best units for the size of data being reported.
Server handles multiple connections, rather than quitting after a single test.
Print periodic, intermediate bandwidth, jitter, and loss reports at specified intervals.
Run the server as a daemon.
Run the server as a Windows NT Service
Use representative streams to test out how link layer compression affects your achievable bandwidth.
There are pre-compiled binaries for a lot of platforms.
Perhaps you want to leave a daemon running on your server. Great!
RHEL6 Installation:
EPEL repository installed.
yum install iperf
Open a port in your firewall.
Save this init script in /etc/init.d:
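The gist does not render here; a minimal RHEL-style sketch (port 5001 is iperf's default, and the options are assumptions to adjust):

```shell
#!/bin/sh
# /etc/init.d/iperfd - minimal sketch, not a full RHEL init script
# chkconfig: 345 99 01
# description: iperf server in daemon mode

OPTS="-s -D -p 5001"          # -D puts iperf in daemon mode

case "$1" in
    start)   iperf $OPTS ;;
    stop)    pkill -x iperf ;;
    status)  pgrep -x iperf >/dev/null && echo "iperf is running" || echo "iperf is stopped" ;;
    restart) "$0" stop; sleep 1; "$0" start ;;
    *)       echo "Usage: $0 {start|stop|status|restart}" >&2 ;;
esac
```

Remember to run chkconfig --add after saving it, and to open the port in your firewall as noted above.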
In the name of Toutatis, why the f*** are the node_modules directories different between npm and "distro" packages???? The error message is similar to:
Error: Cannot find module 'any_npm_module'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at repl:1:7
    at REPLServer.self.eval (repl.js:110:21)
    at repl.js:249:20
    at REPLServer.self.eval (repl.js:122:7)
    at Interface.<anonymous> (repl.js:239:12)
    at Interface.emit (events.js:95:17)
Surprise!!!
CentOS, Fedora, Ubuntu, Debian and MacOSX ports use different paths for the Node.js global modules. If you install the "node" and "npm" binaries, and later you use npm to install some modules, your modules will appear to have disappeared.
Workarounds:
Compile your own node/npm version (LOL!).
Always use local modules (for command-line scripts this is very LOL!).
Use NODE_PATH to point to the correct path:
Standard profile: for a single user, you can use the .profile, .bash_profile or .bashrc files in $HOME (be careful: on login only one kind of file is read, so choose the correct one).
Developer Eclipse: You can configure environment variables in the properties of the main js file of your project. So you could put NODE_PATH there.
Daemon/Service profile: Launch from init.d. You could create a correct init.d script with the NODE_PATH inside ;)
Master of node global modules (one ring to bring them all):
Linux: you need to create /etc/profile.d/nodejs.sh with the export NODE_PATH path there. (And restart)
MacOSX: you need to create the crazy and non-existent /etc/launchd.conf and write this line: setenv NODE_PATH /usr/local/lib/node_modules. And restart.
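For the Linux case, the profile.d file is just one export line. The module path below is the common CentOS location; it is an assumption, so check yours first with npm root -g:

```shell
# /etc/profile.d/nodejs.sh
export NODE_PATH=/usr/lib/node_modules
```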
Someone in Debian (and therefore Ubuntu) chose to remove the "node" binary from the nodejs package. Now the binary is "nodejs". So, for Eclipse and some scripts, node does not work even though the nodejs package is installed!
Well... the same guy preferred to create a second package to solve that. The package is "nodejs-legacy".
So, please, poor Debian/Ubuntu user: remember, when you install node, to install the "nodejs-legacy" package as well so that the "node" binary exists.
Sometimes the RHEL world has odd things. For example:
If you want to configure a network interface using NetworkManager (without X!!!) you can do it with system-config-network script. From here, you can change the network configuration and DNS/hostname of the box.
What is the problem then?
The network interfaces can be enabled or disabled at boot time. You can see that in the configuration file (/etc/sysconfig/network-scripts/ifcfg-<interfacename>). It is something like:
ONBOOT=yes or ONBOOT=no
But we are talking about using NetworkManager (NM_CONTROLLED=yes in that file), so you cannot edit and change that value directly.
What is the way then?
If you go to the famous system-config-network script, you will discover there is no option to change the ONBOOT value.
WTF!
Well, the way to change this value is pretty crazy.
You will need to export the NetworkManager configuration from a terminal:
system-config-network-cmd > export.cfg
Modify the correct line of your network interface. For example:
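For example, with sed. The OnBoot key name below is an assumption about the exported format, so verify it in your export.cfg first; the -i import flag is likewise taken from the tool's help:

```shell
# 1) export the configuration:  system-config-network-cmd > export.cfg
# 2) flip the boot flag for eth0 (key name assumed; check your export.cfg):
if [ -f export.cfg ]; then
    sed 's/OnBoot=false/OnBoot=true/' export.cfg > export-new.cfg
fi
# 3) import it back:  system-config-network-cmd -i < export-new.cfg
```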
mtr combines the functionality of the 'traceroute' and 'ping' programs in a single network diagnostic tool.
And MTR 0.85 has a very interesting new feature. Its git repo says:
Add -z / --show-ip support
This new option displays both the hostname and the IP address of a host in the path. This is particularly important when the hostname determined by a reverse lookup has no corresponding forward record. This is similar to the -b (both) option in tracepath, but -b was already taken in mtr for --bitpattern. Using this option also implies -w / --report-wide as this option isn't terribly useful without it in report mode. In general we endeavor to only show the IP address once per line, but in the split view we consistently add a separate field for IP address to the output.
Signed-off-by: Travis Cross <tc@traviscross.com>
So I have created a new RPM package for the 0.85 version. It is terminal-only. No X, sorry.
@vicendominguez if you look at any role page you will see the OS support list.
— Michael DeHaan (@laserllama) May 26, 2014
@vicendominguez no, there isn't yet. Search for the role purpose first and then open the highest rated ones. Also easy to modify.
— Michael DeHaan (@laserllama) May 26, 2014
So... if you find an interesting role, maybe you cannot use it. You need to verify, role by role, whether it is correct for your OS or not..... there is no other way.
:(
In a project, if you have several Makefiles in different directories and you need to compile all of them, you can try it with a "master" Makefile like this:
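The gist does not embed here; the same effect in a plain shell loop, assuming each subdirectory's Makefile has a default target (a master Makefile would typically do the same with $(MAKE) -C):

```shell
# Run make in every immediate subdirectory that has a Makefile
for d in */; do
    if [ -f "$d/Makefile" ]; then
        make -C "$d" || break   # stop on the first failing sub-build
    fi
done
```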
Oh my... guys. I am losing my battle with Apple. It makes no sense to me, but it is happening. The past couple of days... have been pure shit. Grrr!
If you think your super expensive laptop made in the USA is better than a super cheap model made in China/Korea/Spain, then think again! Apple, as a company, works hard to maintain its image of being perfect (including the support they provide)... but as I recently learned, they are no different from any other company. It is only an image for your brain; it is marketing.
In the 20 years that I have been using laptops, I have NEVER EEEEEEEVER had as many problems as I am having with my Apple laptop. I repeat NEVER has a battery exploded. Yes, you heard me! Exploded like a BOMB! I have had a number of laptops – including laptops from HP, IBM, Lenovo, and Acer - and have never had this happen to me. The batteries in those laptops generally last for five to six years. While that isn’t great, it’s standard for the market.
This time, after just four years, the battery in my macbook exploded. Oh man! This is not normal. You could tell me the battery would end its life without charge … but my macbook went from holding a charge for 3 hours (about 500 cycles) to exploding. Apple is telling me it is normal. WTF! That is not normal. It must be investigated. Either Apple didn't listen to me or they don't want to listen to me. I expected the battery in my macbook to last for at least another 2-3 years. After all, that is the norm … and it is what I expected having paid such a hefty price for it. Grrrr!
So, now I ask myself - Why did I pay double the price for a macbook, if it will only last half the time? It makes no sense. All I can say is … Lesson learned!
My incident ID: 597671455. Apple's response: "Normal end. We don't have a procedure to solve it."
(thanks for the translation, Ken)
Update1: 10.9.3 does not solve the issue (at the moment).
Update2: I lost my battle. I have bought this battery: http://www.amazon.es/gp/product/B008XXK5XS/ref=oh_details_o00_s00_i00?ie=UTF8&psc=1
Update3: At the moment: two weeks with this new battery. I think the temperature is higher than with the original battery while charging, but it is OK. About 4:30h of battery autonomy.
Fraunhofer FDK AAC codec library. This is currently the highest-quality AAC encoder available with ffmpeg. Requires ffmpeg to be configured with --enable-libfdk_aac (and additionally --enable-nonfree if you're also using --enable-gpl). But beware, it defaults to a low-pass filter of around 14kHz. If you want to preserve higher frequencies, use -cutoff 18000. Adjust the number to the upper frequency limit you prefer.
Ok. Here is my own RPM of that library for CentOS 6.5:
Well... perhaps you need SSL on your site but you want to exclude a specific path/URL. This is my config:
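The gist does not render here; a sketch of the idea (the path /nossl/ and the server name are placeholders): in the SSL server block, redirect the excluded location back to plain HTTP.

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # excluded path: bounce it back to plain HTTP
    location /nossl/ {
        return 301 http://$host$request_uri;
    }

    location / {
        # normal SSL content here
    }
}
```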
mysql> drop database my-web;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-web' at line 1
mysql> drop database "my-web";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"my-web"' at line 1
mysql> drop database 'my-web';
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''my-web'' at line 1
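The trick is that MySQL quotes identifiers with backticks, not with single or double quotes:

```sql
mysql> DROP DATABASE `my-web`;
```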
Hello!
A fast trick to redirect ports when you are using a Java application server.
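The gist does not embed here; the usual trick is an iptables REDIRECT in the nat table (8080/8181 are the default HTTP/HTTPS listeners of e.g. GlassFish, so adjust to your server):

```shell
# Redirect the privileged ports to the app server's unprivileged listeners
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8181
```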
Be careful, because ports 80, 443, 8080 and 8181 will be open to the Internet. If you want to close 8080 and 8181, you will need an AJP proxy or to mark packets in the firewall. Read this link:
Go to the "Management Software and Tools" tab and download MegaCLI for Linux. I am using CentOS 6.x. Inside the zip file there is an RPM. It works!
To show all cards in your box:
/opt/MegaRAID/MegaCli/MegaCli64 AdpAllInfo -aALL
PS: Check check!
$ /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aALL | grep State
State : Optimal
Zabbix 2.2 comes with support of loadable modules for extending Zabbix agent and server without sacrificing performance.
A loadable module is basically a shared library used by Zabbix server or agent and loaded on startup. The library should contain certain functions, so that a Zabbix process may detect that the file is indeed a module it can load and work with.
Loadable modules have a number of benefits. Great performance and ability to implement any logic are very important, but perhaps the most important advantage is the ability to develop, use and share Zabbix modules. It contributes to trouble-free maintenance and helps to deliver new functionality easier and independently of the Zabbix core code base.
I have created an agent module to parse the /proc/net/sockstat info for Zabbix >= 2.2.x.
You will be able to watch the orphan sockets or the timewait sockets. They are interesting for DDoS detection, leaks in webapp services, etc.
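For reference, this is the raw source the module parses; the orphan and tw (timewait) counters live in the TCP line:

```shell
# The TCP line looks like: "TCP: inuse 14 orphan 0 tw 5 alloc 17 mem 2"
cat /proc/net/sockstat
```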
This is a minimal template to get info from your Wowza REST URL into your Zabbix platform.
Two items, for now:
Global connections in the Wowza
Global Live streams number
The template uses Zabbix macros to define the user/pass and the Wowza server URL. This permits a fast configuration because you can apply the same template to all your Wowza hosts and change only the user/pass user macros per host.
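By hand, the query behind the items would be something like the following (the /connectioncounts path and port 8086 are the usual Wowza stats defaults, but check your install; user/pass and the hostname are placeholders):

```shell
curl --digest -u user:pass "http://wowza.example.com:8086/connectioncounts"
```

The XML it returns contains the global connection count and the live stream list that the two items read.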