Sunday, July 3, 2016

Helper script to multiupload RPM files to Bintray repo (and remove packages too)

If you have a big local RPM repo and you would like to upload all the packages to Bintray, I have forked and improved the hgomez shell scripts.

The upload script is updated to the current Bintray API, and I have improved its output.

The delete script just removes package versions from Bintray that you have in local storage but didn't want in Bintray (and that were uploaded previously). It is a proof of concept for the API, nothing else.

Two details:
  • In the delete script, I chose to use the $1 parameter to point to the files (wildcards allowed).
  • In the upload script, RPMS_DIRS is used in the upload curl URL, so be careful here. We use a relative path to avoid a long path in the Bintray web UI.

Wednesday, May 4, 2016

RPM GPAC 0.6.1 for CentOS6

Our own version of GPAC is ready: version 0.6.1, without any X dependencies.


Four binaries:

[root@core ~]# rpm -ql gpac |grep bin

Compilation flags:

[root@core ~]# MP4Box -version
MP4Box - GPAC version 0.6.1-revrelease
GPAC Copyright (c) Telecom ParisTech 2000-2012
GPAC Configuration: --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --extra-cflags=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -DPIC -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D_LARGEFILE_SOURCE=1 -D_GNU_SOURCE=1 --enable-debug --libdir=lib64 --disable-oss-audio --disable-x11 --disable-static --use-js=no

You'll find it in the ENETRES repo as always:


Thursday, April 21, 2016

git commit and the error: There was a problem with the editor 'vi'.

Error message:

error: There was a problem with the editor 'vi'.
Please supply the message using either -m or -F option.

Quick solution:

git config --global core.editor $(which vim)

Short reason: if you are using Vundle, or similar, you may run into this error.

Monday, April 11, 2016

'include:' statement in Ansible does not find the correct path running on Vagrant

What is the issue?

You have an include statement inside a role, but when the playbook runs on Vagrant, that path doesn't exist!


├── playbook.yml
└── roles
    ├── base
    │   ├── defaults
    │   │   └── main.yml
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   ├── main.yml
    │   │   ├── setup_debian.yml
    │   │   └── setup_rhel.yml
    │   ├── templates
    │   └── vars
    │       └── main.yml

In tasks/main.yml we have:

- name: Trying to update base installation on RedHat
  include: setup_rhel.yml
  when: ansible_os_family == 'RedHat'

- name: Trying to update base installation on Debian
  include: setup_debian.yml
  when: ansible_os_family == 'Debian'

If you run playbook.yml in a Vagrant box, you may see this error message:

FAILED! => {"failed": true, "reason": "the file_name '/Users/vicente/vagrant/ansible/playbooks/setup_debian.yml' does not exist, or is not readable"}

Well, this is the new tasks/main.yml that solves it:

# Include OS-specific installation tasks.
- name: Trying to update base installation on RedHat
  include: "{{ role_path }}/tasks/setup_rhel.yml"
  when: ansible_os_family == 'RedHat'

- name: Trying to update base installation on Debian
  include: "{{ role_path }}/tasks/setup_debian.yml"
  when: ansible_os_family == 'Debian'


Friday, March 11, 2016

Magic Reboot - Emergency Reboot

Hi guys!

Context: I want to reboot, but...

# reboot
bash: /sbin/reboot: Input/output error
# shutdown -r now
bash: /sbin/shutdown: Input/output error


The magic SysRq interface comes to the rescue: enable it, then ask the kernel for an immediate reboot (warning: this reboots at once, without syncing or unmounting the filesystems):

echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger


Friday, February 5, 2016

ffmpeg v2.8.6 + libx265 + libfdk-aac RPM for CentOS 6

I have created some RPMs from the newest version (the 2.x branch) of the ffmpeg tool for CentOS 6:

[root@core ~]# ffmpeg
ffmpeg version 2.8.6 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
  configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --disable-static --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-bzlib --enable-libass --enable-libdc1394 --enable-libfreetype --enable-openal --enable-libopus --enable-libpulse --enable-libv4l2 --disable-debug --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping --extra-libs=-lstdc++ --enable-libfdk-aac --enable-nonfree
  libavutil      54. 31.100 / 54. 31.100
  libavcodec     56. 60.100 / 56. 60.100
  libavformat    56. 40.101 / 56. 40.101
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 40.101 /  5. 40.101
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.101 /  1.  2.101
  libpostproc    53.  3.100 / 53.  3.100
Hyper fast Audio and Video encoder

Yes! I added x265 and libfdk-aac (used in Android) :)))

As always, available in the repo

Tuesday, January 12, 2016

Autoregistration of Raspberries/Servers with the same hostname on Zabbix

Possible project:

  • At least 200 Raspberries, in different locations
  • There can be two or three Raspberries in the same location.
  • Firewall in all locations
  • No starting date known
  • Installation from the same image, so the hostname is the same in all of them.
  • Nobody can log in


How to monitor those Raspberries with Zabbix?

Zabbix (2.2) uses the hostname as the key field. It's not possible to repeat a hostname in the server, so how to proceed?


This is my workflow:

In the zabbix-agent:
    • Configure the agent as active and disable passive checks (firewall installed, remember?).
    • I need to generate a unique hostname, but I also need to identify each Raspberry in the same or different locations. So I created this Python script to build a hostname (I chose the publicip-macaddress pattern):

I had to change the ":" in the MAC address to "_", because ":" is not a valid character for the hostname field.

In the agent config you will need to run this script; two keys here:
You will need to set a HostMetadata value for the autoregistration:
The agent is ready.

For the server, you will need to clone an OS Linux template and convert it to Active. You will need to change the Type, item by item, in the cloned template:

Now the new autoregistration rule.

Create a new Action and:


That's all.

Thursday, December 17, 2015

hubot + hangups + dokuwiki + zabbix

Fast post!

I have integrated DokuWiki and Zabbix with Hubot, using hangups. This is hell, because you need to make very specific calls with the hangups adapter.

I am using the hangups REST API in Hubot to receive the notifications, so we need to integrate those calls.

My shellscript template:

My Action:

Monday, November 16, 2015

Mydumper RPM v0.9.1 for CentOS 6.x (x86_64)

A week ago, Mydumper jumped to 0.9.1 with a lot of improvements. In the Percona blog we can read:

A significant change included in this version now enables Mydumper to handle all schema objects!! So there is no longer a dependency on using mysqldump to ensure complex schemas are backed up alongside the data.

Let's review some of the new features:

Full schema support for Mydumper/Myloader

Mydumper now takes care of backing up the schema, including Views and Merged tables. As a result, we now have these new associated options:

-d, --no-data   Do not dump table data
-G, --triggers  Dump triggers
-E, --events    Dump events
-R, --routines  Dump stored procedures and functions

These options are not enabled by default to keep backward compatibility with actual mixed solutions using Mysqldump for DDLs.

Locking reduce options

--trx-consistency-only      Transactional consistency only

You can think of this as --single-transaction for mysqldump, but still with binlog position. Obviously this position only applies to transactional tables (TokuDB included). One of the advantages of using this option is that the global read lock is only held for the threads coordination, so it's released as soon as the transactions are started.

GTIDs and Multisource Slave

GTIDs are now recorded on the metadata file. Also Mydumper is now able to detect a multisource slave (MariaDB 10.1.x) and will record all the slaves coordinates.

Myloader single database restore

Until now the only option was to copy the database files to a different directory and restore from it. However, we now have a new option available:

-s, --source-db                   Database to restore

It can be used also in combination with -B, --database to restore to a different database name.
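To make the new options concrete, here is a small sketch of how they could be combined; the directory and database names are placeholders:

```shell
#!/bin/sh
# Sketch: full logical backup with the new 0.9.1 schema options, then a
# single-database restore under a new name. Paths and DB names are placeholders.

# --triggers/--events/--routines dump the schema objects that previously
# needed mysqldump; --trx-consistency-only releases the global read lock
# as soon as the dump threads have started their transactions.
DUMP_CMD="mydumper --triggers --events --routines --trx-consistency-only --outputdir /backups/full"

# -s/--source-db picks one database out of the dump dir,
# -B/--database restores it under a different name.
LOAD_CMD="myloader --directory /backups/full --source-db shop --database shop_restored"

# Uncomment on a host with mydumper/myloader installed:
# $DUMP_CMD
# $LOAD_CMD
echo "$DUMP_CMD"
echo "$LOAD_CMD"
```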

As always, we have created a x86_64 RPM version for Centos 6.x in our repo:

Thursday, September 17, 2015

atrpms is dead and i need ffmpeg for CentOS 6

The Berlin university seems to have switched off the server running the atrpms repo. That is not official, but after 10 days down it is easy to assume.

I have read some posts on the CentOS list about this issue. In some of the answers, people recommend choosing alternative repos. That is crap: the recommendations point to ffmpeg v0.10.x or similar... oh, come on! The latest ffmpeg version today is 2.8.x... is that serious?

In our infrastructure ffmpeg 2.2.x, the atrpms version, works for us. So I have cloned into the ENETRES CentOS 6.x repo the packages (from atrpms) needed to install ffmpeg 2.2.x and the mediainfo tool (with dependencies).

If you are in trouble, you can use our repo to install that version (note 1: remember we have a strong dependency on the EPEL repo; note 2: these packages are not official and come without any support).

Good luck! :)

Tuesday, September 8, 2015

Fast install on CentOS 6 for the old smokeping 2.6.8

As always, I prefer the fast way...

When I was trying to install smokeping to run some latency tests, I found I had to compile it from sources, install some weird CPAN (Perl) dependencies, etc., etc.... horrible for a 5-minute test.

So I "borrowed" some packages from here and there... and finally we have all the packages needed to run a "yum install smokeping" without problems.

This is the old 2.6.8 version so, mainly, forget about IPv6 support.

As always, for CentOS 6, at the enetres repo.

Friday, June 26, 2015

varnish-vmod-geoip RPM package for Varnish > 4.0.1 CentOS 6

This Varnish module exports functions to look up GeoIP country codes and requires the GeoIP library.

Module config and info here:



Monday, May 11, 2015

Ansible + Linode API + CentOS

Fast Mode ON! If you don't understand something... ask about it in the comments.

Requirements for CentOS:
  • yum install python-pip
  • pip install linode-python
  • pip install chube


These values are from the API:

plan: 1           #cheapest 
datacenter: 6     #newmark NJ
distribution: 127 #centos 6.5

There are other possible values, and you will need to query the API for them. To see the full info for these three (distribution IDs, datacenters and plans) from the Linode API, you can run this Node.js script:

Don't forget the sudo npm install linode-api -g
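If you prefer to stay in Python (linode-python is already installed per the requirements above), a rough equivalent sketch follows. The method names assume linode-python's mapping of the old avail.* API endpoints, and LINODE_API_KEY is a hypothetical environment variable:

```python
#!/usr/bin/env python
# Sketch: list Linode datacenter/plan/distribution IDs with linode-python.
# Method names assume linode-python's avail.* endpoint mapping; the API key
# is read from a hypothetical LINODE_API_KEY environment variable.
import os

def fmt(rows, id_key, label_key):
    # One "ID  label" line per row returned by the API
    return "\n".join("%s  %s" % (r[id_key], r[label_key]) for r in rows)

if __name__ == "__main__":
    try:
        from linode.api import Api
        api = Api(os.environ["LINODE_API_KEY"])
        print(fmt(api.avail_datacenters(), "DATACENTERID", "LOCATION"))
        print(fmt(api.avail_linodeplans(), "PLANID", "LABEL"))
        print(fmt(api.avail_distributions(), "DISTRIBUTIONID", "LABEL"))
    except (ImportError, KeyError):
        pass  # linode-python or the API key is not available on this box
```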


Fast Mode OFF!

Monday, April 27, 2015

Forcing ansible playbooks to concrete hosts (and vagrant version)

This is a fast workaround to force a playbook to run against a specific host.
Important: you must have the host already added to the Ansible host inventory.

You will need to turn hosts into a variable. From:

- name: Installing base server template
  hosts: all
  gather_facts: true
  roles:
   - base
   - ntpenabled


to:

- name: Installing base server template
  hosts: '{{ hosts }}'
  gather_facts: true
  roles:
   - base
   - ntpenabled

And now, run the playbook in the terminal:

ansible-playbook <playbook.yml> --extra-vars="hosts=<ip_or_hostname_here>"

and for vagrant:

  config.vm.define "test" do |test| = "chef/centos-6.6" "private_network", ip: ""
    test.vm.provision "ansible" do |ansible|
      ansible.playbook = "ansible/playbooks/base.yml"
      ansible.sudo = true
      ansible.extra_vars = {
        hosts: "ip_or_hostname_here"
      }
    end
  end

Tuesday, April 21, 2015

Ansible + Vagrant: forget your interactive prompts (SOLVED)

If you have a playbook with something like this:

- name: Installing test box
  hosts: all
  connection: paramiko
  vars_prompt:
     - name: "hosthname"
       prompt: "Give me a hostname:"
       private: no
  gather_facts: true
  roles:
   - base
   - redisenabled
   - nodebase

And you are trying to run it with Vagrant using this Vagrantfile snippet:

  config.vm.define "test" do |test| = "chef/centos-6.6" "private_network", ip: ""
    test.vm.provision "ansible" do |ansible|
      ansible.playbook = "ansible/playbooks/test.yml"
      ansible.sudo = true
    end
  end

This var (hosthname) is not interactive under Vagrant: you will never be asked.

What is the trick? I tried this workaround and I liked it:

  • Just in case, create a default value for the variable.
  • Force the value of the variable in the Vagrantfile.

So, the final config files would be:
  • Playbook:
- name: Installing test box
  hosts: all
  connection: paramiko
  vars_prompt:
     - name: "hosthname"
       prompt: "Give me a hostname:"
       private: no
       default: "test01-default"
  gather_facts: true
  roles:
   - base
   - redisenabled
   - nodebase

  • Vagrantfile
  config.vm.define "test" do |test| = "chef/centos-6.6" "private_network", ip: ""
    test.vm.provision "ansible" do |ansible|
      ansible.playbook = "ansible/playbooks/test.yml"
      ansible.sudo = true
      ansible.extra_vars = {
        hosthname: "test01"
      }
    end
  end

Wednesday, April 1, 2015

Boost C++ library RPM packages for CentOS 6

I have created some RPM packages from the Boost C++ Libraries: 1.54.0-8.20.2, 1.55.0, 1.56.0, 1.57.0, 1.58.0 and 1.59.0, for CentOS x86_64 (no 32-bit, sorry).

Building the Boost C++ Libraries with:

Performing configuration checks

    - 32-bit                   : no
    - 64-bit                   : yes
    - arm                      : no
    - mips1                    : no
    - power                    : no
    - sparc                    : no
    - x86                      : yes
    - lockfree boost::atomic_flag : yes
    - has_icu builds           : yes
warning: Graph library does not contain MPI-based parallel components.
note: to enable them, add "using mpi ;" to your user-config.jam
    - zlib                     : yes
    - iconv (libc)             : yes
    - icu                      : yes
    - compiler-supports-ssse3  : yes
    - compiler-supports-avx2   : no
    - gcc visibility           : yes
    - long double support      : yes
    - zlib                     : yes

Component configuration:

    - atomic                   : building
    - chrono                   : building
    - container                : building
    - context                  : building
    - coroutine                : building
    - date_time                : building
    - exception                : building
    - filesystem               : building
    - graph                    : building
    - graph_parallel           : building
    - iostreams                : building
    - locale                   : building
    - log                      : building
    - math                     : building
    - mpi                      : not building
    - program_options          : building
    - python                   : building
    - random                   : building
    - regex                    : building
    - serialization            : building
    - signals                  : building
    - system                   : building
    - test                     : building
    - thread                   : building
    - timer                    : building
    - wave                     : building

Easy to add:
sudo wget -O /etc/yum.repos.d/enetres.repo
sudo yum install boost-devel


Tuesday, March 3, 2015

ping to multiple hosts at the same time with fping

How to ping multiple hosts with fping, showing which hosts are between 200 ms and 999 ms of latency, to detect hosts with network issues in the LAN?

IPVD.txt is a file with the IP list. No limits; I was using 400 IPs.
The "watch" tool is handy for checking the results periodically.
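The exact one-liner is not shown above; it could be something along these lines (IPVD.txt as described, and the egrep pattern keeps round-trip times of 200-999 ms):

```shell
#!/bin/sh
# Sketch: ping every IP in IPVD.txt once and keep only the slow answers.
# fping -e appends the round-trip time, e.g. " is alive (312 ms)";
# the pattern keeps three-digit latencies starting with 2-9, i.e. 200-999 ms.
SLOW='\([2-9][0-9][0-9](\.[0-9]+)? ms\)'
fping -e -f IPVD.txt 2>/dev/null | egrep "$SLOW" || true  # no slow hosts is fine too
```

Wrapped in `watch -n 10 "..."` it re-checks the whole list periodically, as mentioned above.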


Wednesday, February 18, 2015

supervisord in CentOS 7 (systemd version)


Fast installation on CentOS 7 of this "helper" for the queue services in the Laravel or Django frameworks. The EPEL package is too old, so:
  1. yum install python-setuptools python-pip
  2. pip install supervisor
  3. mkdir -p /etc/supervisor.d
  4. echo_supervisord_conf > /etc/supervisor.d/supervisord.conf
  5. forked systemd init script (thx to Jiangge Zhang) in /usr/lib/systemd/system/supervisord.service:
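The unit file itself is not pasted here; a sketch along the lines of that forked script could be (the -c path matches step 4 above; adjust as needed):

```ini
[Unit]
Description=Supervisor process control daemon

[Service]
Type=forking
ExecStart=/usr/bin/supervisord -c /etc/supervisor.d/supervisord.conf
ExecStop=/usr/bin/supervisorctl shutdown
ExecReload=/usr/bin/supervisorctl reload
KillMode=process
Restart=on-failure
RestartSec=42s
; Run as this user; it must exist on the system (see the note below)
User=nginx

[Install]
WantedBy=multi-user.target
```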

  1. systemctl enable supervisord
  2. systemctl start supervisord

User=nginx is useful to run this process as the nginx user. You can change it, but the user must exist in the system.

Monday, February 9, 2015

Nikto , sqlmap, Curl ... + avoiding CloudFlare challenge in CentOS6 in terminal (Solved)

You may find yourself in this situation:
  • No Windows environment
  • Just a text browser
  • You want to run a "nikto" scan
  • The target/host is protected with CloudFlare.

Result: everything is a false positive:

+Server: cloudflare-nginx
+ Uncommon header 'cf-ray' found, with contents: 1aad22aaaaaaa7-MAD
+ Uncommon header 'x-frame-options' found, with contents: SAMEORIGIN
+ Cookie __cfduid created without the httponly flag
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ Server banner has changed from 'cloudflare-nginx' to '-nginx' which may suggest a WAF, load balancer or proxy is in place
+ "robots.txt" contains 1 entry which should be manually viewed.
+ lines
+ /crossdomain.xml contains 0 line which should be manually viewed for improper domains or wildcards.
+ Server leaks inodes via ETags, header found with file /favicon.ico, inode: 2221478, size: 1150, mtime: 0x4c35de66b2900
+ Uncommon header 'cf-cache-status' found, with contents: HIT
+ /kboard/: KBoard Forum 0.3.0 and prior have a security problem in forum_edit_post.php, forum_post.php and forum_reply.php
+ /lists/admin/: PHPList pre 2.6.4 contains a number of vulnerabilities including remote administrative access, harvesting user info and more. Default login to admin interface is admin/phplist
(a lot of more lines)

If you run it again with a "verbose" view in another window, like:

tcpdump -A -s0 port 80 |grep title

you will see:

 <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>
  <title>Just a moment...</title>

What is happening?

On some sites, CloudFlare serves (to protect the site) a challenge before the real webpage. There are two types:

  • Javascript challenge
  • Captcha challenge

The second type is the usual one when you are visiting the site through Tor. There is no good solution for that one.

For the first one, the cloudflare-scrape project is our solution. You can develop whatever you want with that Python module.

For our problem (I was on CentOS), the procedure was:
  1. yum install python-requests #installs the package dependencies, but the requests library shipped with CentOS is not enough; >= 2.x is a MUST
  2. yum install python-pyV8 #look at my post about the pyV8 RPM
  3. yum install python-pip #to install the newest requests module
  4. pip install requests --upgrade #this installs the correct module version
  5. yum install #our nikto in CentOS

git clone

and I made a fast and ugly script (I am not a developer) with the module:

import sys
import requests
import cfscrape

sess = requests.session()
sess.mount("http://", cfscrape.CloudflareAdapter())
sess.get(sys.argv[1])

print "\"cf_clearance\"=\"%s\";\"__cfduid\"=\"%s\"" % (sess.cookies["cf_clearance"], sess.cookies["__cfduid"])

Now, the sugar: we have to use the same user agent in nikto and in cloudflare-scrape. Both allow changing the user-agent.

Now we run the script:


This cookie goes into the STATIC-COOKIE setting in /etc/nikto/config.

And now, retry time: re-run nikto and look again at the "verbose" window with the tcpdump output:

<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>
<title>404 Not Found</title>

Yeah, challenge accepted and it works! ;)

Other example:; curl -s $SITE -A 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/34.0.1847.116 Chrome/34.0.1847.116 Safari/537.36' | grep title
 <title>Just a moment...</title>

Script (we need to clean the quotes here; a different format from the nikto config):

import sys
import requests
import cfscrape

sess = requests.session()
sess.mount("http://", cfscrape.CloudflareAdapter())
sess.get(sys.argv[1])

print "cf_clearance=%s;__cfduid=%s" % (sess.cookies["cf_clearance"], sess.cookies["__cfduid"])

And:; curl --cookie `./ $SITE` -s $SITE -A 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/34.0.1847.116 Chrome/34.0.1847.116 Safari/537.36' | grep title


Cool? Now think about building a proxy with this... yeah! Very coooooool!

Wednesday, February 4, 2015



very crazy!


Thanks to everybody.

Tuesday, January 27, 2015

Fast access to the history host table of Netcraft site from terminal

Fast and ugly, but it works. Useful for detecting old IPs and the system OS. The Netcraft host history table from the terminal:

Netcraft uses JavaScript, so I chose casperjs for the scraper:

Monday, January 19, 2015

pyV8 RPM for CentOS 6

The first RPM package for the pyV8 project. You will be spared the whole compiling process ;)
I used the latest revision in the svn today (r586).

It depends on the Boost library, but I have RPMs for that in the repo ;)

There we go:

yum install python-pyV8

Loading mirror speeds from cached hostfile
 * base:
 * epel:
 * extras:
 * updates:
Resolving Dependencies
--> Running transaction check
---> Package python-pyV8.x86_64 0:1.0-preview_r586svn.el6 will be installed
--> Processing Dependency: for package: python-pyV8-1.0-preview_r586svn.el6.x86_64
--> Processing Dependency: for package: python-pyV8-1.0-preview_r586svn.el6.x86_64
--> Processing Dependency: for package: python-pyV8-1.0-preview_r586svn.el6.x86_64
--> Running transaction check
---> Package libboost_python1_55_0.x86_64 0:1.55.0-1 will be installed
--> Processing Dependency: boost-license1_55_0 for package: libboost_python1_55_0-1.55.0-1.x86_64
---> Package libboost_system1_55_0.x86_64 0:1.55.0-1 will be installed
---> Package libboost_thread1_55_0.x86_64 0:1.55.0-1 will be installed
--> Running transaction check
---> Package boost-license1_55_0.x86_64 0:1.55.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                  Arch      Version                      Repository                                     Size
Installing:
 python-pyV8              x86_64    1.0-preview_r586svn.el6      enetres                                         10 M
Installing for dependencies:
 boost-license1_55_0      x86_64    1.55.0-1                     enetres                                         39 k
 libboost_python1_55_0    x86_64    1.55.0-1                     enetres                                        130 k
 libboost_system1_55_0    x86_64    1.55.0-1                     enetres                                         40 k
 libboost_thread1_55_0    x86_64    1.55.0-1                     enetres                                         62 k

Transaction Summary
Install       5 Package(s)

Total size: 11 M
Total download size: 271 k
Installed size: 11 M
Is this ok [y/N]:

As always, the pyV8 RPM package is in our repo:


Tuesday, January 13, 2015

nginx 1.7.6 RPM CentOS 6 + yaoweibin no_buffer patch + fancyindex

New nginx 1.7.6 RPM for CentOS 6 with fancyindex, the yaoweibin no_buffer patch and all the modules:

nginx version: nginx/1.7.6
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) 
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/ --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --add-module=/home/dag/rpmbuild/SOURCES/ngx-fancyindex --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

Here: in the repo.


Friday, December 5, 2014

Ansible: Generating a SSH pub key file and uploading to other host to sync files from there

Update: check the comments for new workflows

Original from 19/May/2014... updated!

I found this workflow for our systems:

  1. Up the new box.
  2. Generate keys in that new box.
  3. "Fetch" the pub key from the new server to the ansible server.
  4. Copy that key to authorized_keys file of the other server (from ansible server).
  5. Execute an rsync from the new server to the other server, without being asked for a key.
My trick is:
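A rough sketch of how those five steps could look as a playbook (host names, user and paths are hypothetical, not the real ones):

```yaml
# Steps 1-2: the new box is up; generate a key pair on it
- hosts: newbox
  tasks:
    - name: Create user with an SSH key pair
      user: name=deploy generate_ssh_key=yes ssh_key_bits=2048

    # Step 3: bring the pub key back to the Ansible server
    - name: Fetch the pub key
      fetch: src=/home/deploy/.ssh/ dest=keys/ flat=yes

# Step 4: authorize that key on the other server
- hosts: otherserver
  tasks:
    - name: Add the new box key to authorized_keys
      authorized_key: user=deploy key="{{ lookup('file', 'keys/') }}"

# Step 5: rsync from the new box, no key prompt anymore
- hosts: newbox
  tasks:
    - name: Sync files from otherserver
      command: rsync -a deploy@otherserver:/data/ /data/
```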

Tuesday, November 11, 2014

memcached-zabbix-template monitor

This is a minimal template to get info from your Memcached servers, from two possible places: via zabbix-agentd on the clients, or via externalscripts on the Zabbix server. Choose your option.

Monitored information for now:
  • 'bytes', 
  • 'cmd_get', 
  • 'cmd_set', 
  • 'curr_items', 
  • 'curr_connections', 
  • 'limit_maxbytes', 
  • 'uptime', 
  • 'get_hits', 
  • 'get_misses', 

And the special HIT-ratio in %:
  • 'ratio' 
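The script itself is shipped with the template rather than pasted here; a minimal sketch of how such a check could work (the host/port defaults and the stats parsing are assumptions, though 11211 is the standard port mentioned below):

```python
#!/usr/bin/env python
# Sketch: fetch stats from memcached over its text protocol and compute
# the HIT ratio. Host/port defaults are assumptions (11211 is the standard).
import socket

def parse_stats(raw):
    # memcached "stats" replies look like "STAT get_hits 123" per line,
    # terminated by an "END" line
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

def hit_ratio(stats):
    # HIT ratio in % = hits / (hits + misses) * 100
    hits = float(stats.get("get_hits", 0))
    misses = float(stats.get("get_misses", 0))
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

def fetch_stats(host="", port=11211):
    s = socket.create_connection((host, port), timeout=5)
    s.sendall(b"stats\r\n")
    data = b""
    while not data.endswith(b"END\r\n"):
        data += s.recv(4096)
    s.close()
    return parse_stats(data.decode())

if __name__ == "__main__":
    import sys
    key = sys.argv[1] if len(sys.argv) > 1 else "ratio"
    try:
        stats = fetch_stats()
    except socket.error:
        stats = None  # no local memcached to talk to
    if stats:
        print(hit_ratio(stats) if key == "ratio" else stats.get(key))
```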

Installation in the Zabbix Server

You should look for the external scripts directory in your Zabbix server configuration file. In the CentOS 6.5 RPM Zabbix installation it is: /usr/lib/zabbix/externalscripts

Copy the python script there. A chmod/chown to grant execution permission is necessary.

Now, in your Zabbix frontend: Configuration, Templates section, Import button on the right side.

Choose the XML file and import it.

Apply this new template to your Memcached servers.

You don't need to modify the template if you are using the standard port to access Memcached (port 11211).

It allows a fast rollout, because you can apply the same template to all your Memcached servers without any modification/installation on the agents.

Of course, it can work on the agent/client side too.

You can find it in my repo: