Friday, August 3, 2018

Content-Security-Policy: frame-ancestors *

Content-Security-Policy: frame-ancestors *
is the header line that permits your HTML, content or objects to be embedded as an iframe, frame or whatever in any other website. It is sent by the source website. BUT none of your iframes will work in the browser over file:// and you will go crazy trying to figure out what is happening in your local tests.

Quick hotfix: on Linux or macOS you can run, from the local path of your HTML, the famous python -m SimpleHTTPServer and try it at http://localhost:8000/<uripath>
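Something like this, for example (the local path and the site URL are just placeholders):

# serve the local files over HTTP instead of opening them with file://
cd /path/to/your/html
python -m SimpleHTTPServer 8000        # with Python 3: python3 -m http.server 8000

# in another terminal, check the CSP header the source website is actually sending
curl -sI https://example.com/embedded-page | grep -i content-security-policy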

Thursday, July 20, 2017

How to trigger a pipeline from another pipeline in Buildkite using JSON format.

If you are using Buildkite, you will find how to "trigger" a pipeline from another pipeline in this link.

All of the examples there are in YAML format and, I don't know why, it is impossible to find examples in JSON format.

After speaking with Buildkite support, I can confirm it is possible to "trigger" pipelines from JSON, but it is hard to find the steps to follow. Their support email contains good information to follow:

You should be able to do something like this to trigger a build on the current pipeline:
$ cat my-pipeline.json
{
  "steps": [
    { "trigger": "name-of-pipeline", "commit": "HEAD", "branch": "master" }
  ]
}
$ buildkite-agent pipeline upload my-pipeline.json

The equivalent YAML is:

$ cat my-pipeline.yml
steps:
trigger: "name-of-pipeline"
commit: "HEAD"
branch: "master"
$ buildkite-agent pipeline upload my-pipeline.yml

If you're running the V3 beta version of our agent, you can use environment variables directly within the pipeline so you can use them in your trigger steps. Here's a docs page about it: https://buildkite.com/docs/pipelines/trigger-step. It'd let you write your pipeline.json like this, and have it "just work"

$ cat my-pipeline.json
{
  "steps": [
    { "trigger": "name-of-pipeline", "commit": "$BUILDKITE_COMMIT", "branch": "$BUILDKITE_BRANCH" }
  ]
}
$ buildkite-agent pipeline upload my-pipeline.json

Tuesday, June 27, 2017

Fixed Shodan API in the old TheHarvester

If you are old-school, I am sure you have worked with the old theHarvester to do some OSINT (or whatever they call extracting/searching data from different webpages these days).

Also, theHarvester is available in Kali Linux.

Well, the problem comes from Shodan: they updated their API and that broke theHarvester's Shodan support.

I opened an issue and, finally, spent a few minutes fixing it in this PR: https://github.com/laramies/theHarvester/pull/58

I don't know if laramies has abandoned the project.... anyway, my contribution is there...
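For reference, a typical run against the Shodan source with the old CLI looks roughly like this (the domain is a placeholder, and your Shodan API key has to be set wherever your copy of theHarvester stores it):

# query the Shodan data source with the old theHarvester command line
theharvester -d example.com -l 100 -b shodan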

Sunday, June 4, 2017

asdf in Homebrew for macOS

Well, if you don't know asdf, you should.

asdf is an extendable version manager with support for Ruby, Node.js, Elixir, Erlang & more.... I am using it with Terraform... super cool and very easy to use.

They have a good README describing how to install it, but they didn't have a Homebrew formula to make it suuuperfast on macOS.

Well, asdf is available directly in homebrew right now. I wrote a formula for asdf and it was merged:

$ brew info asdf
asdf: stable 0.3.0
Extendable version manager with support for Ruby, Node.js, Erlang & more
https://github.com/asdf-vm
/usr/local/Cellar/asdf/0.3.0 (1,740 files, 275.4MB) *
  Built from source on 2017-06-04 at 09:04:49
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/asdf.rb
==> Dependencies
Required: autoconf ✔, automake ✘, libtool ✔, coreutils ✔, libyaml ✔, openssl ✔, readline ✔, unixodbc ✔
==> Caveats
Add the following line to your bash profile (e.g. ~/.bashrc, ~/.profile, or ~/.bash_profile)
     source /usr/local/opt/asdf/asdf.sh
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d 
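To actually put it to work with Terraform, something like this (the plugin repo URL and the version are just examples; check the asdf plugins list for the right repo):

brew install asdf
echo -e '\nsource /usr/local/opt/asdf/asdf.sh' >> ~/.bash_profile

# manage Terraform versions with asdf
asdf plugin-add terraform <terraform-plugin-git-url>
asdf install terraform 0.9.8
asdf global terraform 0.9.8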

Wednesday, September 14, 2016

Mirroring my old RPM CentOS6 REPO

As you know, we made some interesting RPM packages while I was an employee of Enetres.

Obviously, I lost access to the repo when I left that company and, after 8 years working on RPMs, we are not working on RPM packages in my new position anymore. Just in case, I rebuilt all the packages in a new repo, preserving versions. I don't know what strategy my former company will follow in the future, so I would like to preserve that work for the people who are using legacy or deprecated software without any other possible option to choose.

Bintray seems to offer a good free account. I rebuilt 256 CentOS 6 packages and the configuration steps for this mirror are easy:

Run the following to get a generated .repo file:
  1. wget https://bintray.com/vicendominguez/CentOS6/rpm -O /etc/yum.repos.d/bintray-vicendominguez-CentOS6.repo
     or copy this text into a 'bintray-vicendominguez-CentOS6.repo' file on your Linux machine:

     #bintraybintray-vicendominguez-CentOS6 - packages by vicendominguez from Bintray
     [bintraybintray-vicendominguez-CentOS6]
     name=bintray-vicendominguez-CentOS6
     baseurl=https://dl.bintray.com/vicendominguez/CentOS6
     gpgcheck=0
     repo_gpgcheck=0
     enabled=1

  2. (only if you copied the text manually) sudo mv bintray-vicendominguez-CentOS6.repo /etc/yum.repos.d/
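After that, a quick sanity check (the repo id comes from the section header in the .repo file above):

yum --disablerepo="*" --enablerepo="bintraybintray-vicendominguez-CentOS6" list available | head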

If I have to make a new version of these packages... I will bump it here, in this repo.

Thx...

Sunday, July 3, 2016

Helper script to multiupload RPM files to Bintray repo (and remove packages too)

If you have a big local RPM repo and you would like to upload all the packages to Bintray, I have forked and "improved" the hgomez shell scripts.

The "upload shell script" is updated to the current Bintray API and i have improved the output.

The "delete shell script" is just to remove version of packages from Bintray which you have in the local storage but you didn't want in Bintray (and they were uploaded previously). It is a proof of concept for the API, nothing else.

Two details:
  • In the delete script, I chose to use the $1 parameter to point to the files (with wildcards).
  • In the upload script, RPMS_DIRS is used in the upload curl URL, so be careful here. We use a relative path to avoid a long path in the Bintray web UI (the underlying API calls are sketched below).
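For context, these are roughly the Bintray API calls the scripts wrap; the package name and version below are placeholders:

# upload one RPM (PUT) and then publish the version (POST)
curl -u "$BINTRAY_USER:$BINTRAY_API_KEY" -T mypackage-1.0-1.el6.x86_64.rpm \
     "https://api.bintray.com/content/vicendominguez/CentOS6/mypackage/1.0/mypackage-1.0-1.el6.x86_64.rpm"
curl -u "$BINTRAY_USER:$BINTRAY_API_KEY" -X POST \
     "https://api.bintray.com/content/vicendominguez/CentOS6/mypackage/1.0/publish"

# delete a version that should not be in Bintray anymore
curl -u "$BINTRAY_USER:$BINTRAY_API_KEY" -X DELETE \
     "https://api.bintray.com/packages/vicendominguez/CentOS6/mypackage/versions/1.0"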



Wednesday, May 4, 2016

RPM GPAC 0.6.1 for CentOS6

Our own version of GPAC is ready: version 0.6.1, without any X dependencies.

gpac-0.6.1-1_noX.el6.x86_64

Four binaries:

[root@core ~]# rpm -ql gpac |grep bin
/usr/bin/DashCast
/usr/bin/MP42TS
/usr/bin/MP4Box
/usr/bin/MP4Client


Compilation flags:

[root@core ~]# MP4Box -version
MP4Box - GPAC version 0.6.1-revrelease
GPAC Copyright (c) Telecom ParisTech 2000-2012
GPAC Configuration: --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --extra-cflags=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -DPIC -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D_LARGEFILE_SOURCE=1 -D_GNU_SOURCE=1 --enable-debug --libdir=lib64 --disable-oss-audio --disable-x11 --disable-static --use-js=no
Features: GPAC_64_BITS GPAC_HAS_SSL GPAC_HAS_JPEG GPAC_HAS_PNG 
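As a quick smoke test of the DASH support (standard MP4Box DASH flags; the input and output names are placeholders):

# package an MP4 into 4-second DASH segments and write the manifest
MP4Box -dash 4000 -frag 4000 -rap -out manifest.mpd input.mp4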


You'll find it in my bintray repo as always: https://bintray.com/vicendominguez/CentOS6/gpac

:)





Thursday, April 21, 2016

git commit and the error: There was a problem with the editor 'vi'.

Error message:

error: There was a problem with the editor 'vi'.
Please supply the message using either -m or -F option.


Quick solution:

git config --global core.editor $(which vim)

Short reason: if you are using Vundle, or similar, you could hit this error.
Issue: https://github.com/VundleVim/Vundle.vim/issues/167
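To double-check what git will use afterwards, or to override it for a single commit:

git config --global --get core.editor    # should print the vim path
GIT_EDITOR=$(which vim) git commit       # one-off override via the environment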

Monday, April 11, 2016

'include:' statement in Ansible does not find the correct path running on Vagrant

What is the issue?

You have an include statement inside a role, but when the playbook runs on Vagrant, that path doesn't exist!!

Tree:

├── playbook.yml
└── roles
    ├── base
    │   ├── README.md
    │   ├── defaults
    │   │   └── main.yml
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   ├── main.yml
    │   │   ├── setup_debian.yml
    │   │   └── setup_rhel.yml
    │   ├── templates
    │   └── vars
    │       └── main.yml


In tasks/main.yml we have:

- name: Trying to update base installation on RedHat
  include: setup_rhel.yml
  when: ansible_os_family == 'RedHat'

- name: Trying to update base installation on Debian
  include: setup_debian.yml
  when: ansible_os_family == 'Debian'


If you run the playbook.yml in a Vagrant box you might see this error message:

FAILED! => {"failed": true, "reason": "the file_name '/Users/vicente/vagrant/ansible/playbooks/setup_debian.yml' does not exist, or is not readable"}

Well, this is the new tasks/main.yml to solve it:

---
# Include OS-specific installation tasks.
- name: Trying to update base installation on RedHat
  include: "{{ role_path }}/tasks/setup_rhel.yml"
  when: ansible_os_family == 'RedHat'

- name: Trying to update base installation on Debian
  include: "{{ role_path }}/tasks/setup_debian.yml"
  when: ansible_os_family == 'Debian'

:)



Friday, March 11, 2016

Magic Reboot - Emergency Reboot

Hi guys!

Context:  I want to reboot but.....

# reboot
bash: /sbin/reboot: Input/output error
# shutdown -r now
bash: /sbin/shutdown: Input/output error

Solution:

echo 1 > /proc/sys/kernel/sysrq    # enable the magic SysRq interface
echo b > /proc/sysrq-trigger       # b: reboot immediately
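If the disks still respond, a slightly gentler variant syncs and remounts read-only before the final 'b':

echo s > /proc/sysrq-trigger      # s: sync all mounted filesystems
echo u > /proc/sysrq-trigger      # u: remount all filesystems read-only
echo b > /proc/sysrq-trigger      # b: reboot immediately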

 

Friday, February 5, 2016

ffmpeg v2.8.6 + libx265 + libfdk-aac RPM for CentOS 6

I have created some RPMs of the newest version (v2 branch) of the ffmpeg tool for CentOS 6:


[root@core ~]# ffmpeg
ffmpeg version 2.8.6 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
  configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --disable-static --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-bzlib --enable-libass --enable-libdc1394 --enable-libfreetype --enable-openal --enable-libopus --enable-libpulse --enable-libv4l2 --disable-debug --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping --extra-libs=-lstdc++ --enable-libfdk-aac --enable-nonfree
  libavutil      54. 31.100 / 54. 31.100
  libavcodec     56. 60.100 / 56. 60.100
  libavformat    56. 40.101 / 56. 40.101
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 40.101 /  5. 40.101
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.101 /  1.  2.101
  libpostproc    53.  3.100 / 53.  3.100
Hyper fast Audio and Video encoder

Yes! I added x265 and libfdk-aac (used in Android) :)))

As always, available in the repo: https://bintray.com/vicendominguez/CentOS6/ffmpeg




Tuesday, January 12, 2016

Autoregistration of Raspberries/Servers with the same hostname on Zabbix

Possible project


  • At least 200 Raspberries in different locations
  • We can have two or three Raspberries in the same location.
  • Firewall in all locations
  • No starting date known
  • Installation from the same image, so the hostname is the same on all of them.
  • Nobody can log in

Question


 How to monitor those Raspberries with Zabbix?

Key


Zabbix (2.2) uses the hostname as the key field. It's not possible to repeat a hostname on the server, so how do we proceed?

Workflow



This is my workflow:

In zabbix-agent:
    • Configure the agent as active and disable passive checks (there is a firewall, remember?)
    • I need to create a unique hostname but I also need to identify each Raspberry across the same/different locations. So I created a Python script to build a hostname (I chose the publicip-macaddress pattern):
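A minimal shell sketch of that idea (the original was a small Python script; this version assumes eth0 and an external IP lookup service such as icanhazip.com):

#!/bin/bash
# build a hostname like <publicip>-<macaddress>, with ':' replaced by '_'
PUBIP=$(curl -s http://icanhazip.com)
MAC=$(cat /sys/class/net/eth0/address | tr ':' '_')
echo "${PUBIP}-${MAC}"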



I had to change the ":" in the MAC address to "_" because ":" is not a valid character for the hostname field.

In the agent config you will need to run this script; two keys matter here:
EnableRemoteCommands=1
HostnameItem=system.run["/usr/local/bin/get_macname.py"]
You will also need to set a metadata value for the autoregistration:
HostMetadata=RaspberryCol
The agent is ready.

On the server side you will need to clone an OS Linux template and convert it to active checks. You will have to change the Type, item by item, in the cloned template:



Now the new autoregistration rule.

Create a new Action and:



And:


That's all.

Thursday, December 17, 2015

hubot + hangups + dokuwiki + zabbix

Fast post!

I have integrated DokuWiki and Zabbix with Hubot using hangups. This is hell because you need to make some very specific calls with the hangups adapter.

I am using the hangups REST API in hubot to receive the notifications, so we need to integrate those calls.



My shellscript template:
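Roughly, the idea is a standard Zabbix AlertScript that forwards the alert to an HTTP route exposed by a hubot script (the /hubot/zabbix route and the host below are hypothetical; the hubot side must define that route and relay the message through the hangups adapter):

#!/bin/bash
# Zabbix passes: $1 = send-to, $2 = subject, $3 = message
TO="$1"
SUBJECT="$2"
MESSAGE="$3"

# forward the alert to the (hypothetical) hubot HTTP route
curl -s -X POST "http://hubot.example.local:8080/hubot/zabbix" \
     --data-urlencode "room=${TO}" \
     --data-urlencode "message=${SUBJECT}: ${MESSAGE}"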



My Action:


Monday, November 16, 2015

Mydumper RPM v0.9.1 for CentOS 6.x (x86_64)

A week ago Mydumper jumped to 0.9.1 with a lot of improvements. In the Percona blog we can read:

A significant change included in this version now enables Mydumper to handle all schema objects!!  So there is no longer a dependency on using mysqldump to ensure complex schemas are backed up alongside the data.
Let’s review some of the new features:
Full schema support for Mydumper/Myloader
Mydumper now takes care of backing up the schema, including Views and Merged tables. As a result, we now have these new associated options:
-d, --no-data Do not dump table data
-G, --triggers Dump triggers
-E, --events Dump events
-R, --routines Dump stored procedures and functions
These options are not enabled by default to keep backward compatibility with actual mixed solutions using Mysqldump for DDLs.
Locking reduce options
--trx-consistency-only      Transactional consistency only
You can think on this as --single-transaction for mysqldump, but still with binlog position. Obviously this position only applies to transactional tables (TokuDB included).  One of the advantages of using this option is that the global read lock is only held for the threads coordination, so it’s released as soon as the transactions are started.
GTIDs and Multisource Slave 
GTIDs are now recorded on the metadata file.  Also Mydumper is now able to detect a multisource slave (MariaDB 10.1.x) and will record all the slaves coordinates.
Myloader single database restore
Until now the only option was to copy the database files to a different directory and restore from it. However, we now have a new option available:
-s, --source-db                   Database to restore
It can be used also in combination with -B, --database to restore to a different database name.
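Putting the new options together, a typical dump/restore cycle could look like this (paths and database names are placeholders):

# full dump including triggers, events and routines, with minimal locking
mydumper --triggers --events --routines --trx-consistency-only \
         -B mydb -o /backup/mydb/

# restore only that database, into a different database name
myloader -d /backup/mydb/ -s mydb -B mydb_restored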

As always, we have created an x86_64 RPM version for CentOS 6.x in our repo:

Press  "set me up!" there to configure the repository




Thursday, September 17, 2015

atrpms is dead and I need ffmpeg for CentOS 6

The Berlin university seems to have switched off the server running the atrpms repo. That is not official, but after 10 days down it is easy to believe.

I have read some posts on the CentOS list about this issue. In some answers, people recommend choosing alternative repos. That is crap: the recommendations point to ffmpeg v0.10.x or similar.... oh come on! The latest ffmpeg version today is 2.8.x... is that serious?

In our infrastructure ffmpeg 2.2.x is valid for us; it is the atrpms version. So I have cloned, in my Bintray CentOS 6.x repo, the necessary packages (from atrpms) to install ffmpeg 2.2.x and the mediainfo tool (with dependencies).

If you are in trouble, you can use our repo to install that version (note 1: remember we have a strong dependency on the EPEL repo; note 2: these packages are not official and they don't have any support).

Good luck! :)

Tuesday, September 8, 2015

Fast install on CentOS 6 for the old smokeping 2.6.8

As always, I prefer the fast way....

When I was trying to install smokeping to run some latency tests, I found I had to compile it from sources and install some weird CPAN (Perl) dependencies, etc etc etc.... horrible for a 5-minute test.

So I "borrowed" some packages from here and from there..... and finally we have all the packages to make a "yum install smokeping" without problems.

This is the old 2.6.8 version so, mainly, forget about IPv6 support.

As always,  for CentOS 6, at enetres repo.


Friday, June 26, 2015

varnish-vmod-geoip RPM package for Varnish > 4.0.1 CentOS 6

This Varnish module exports functions to look up GeoIP country codes and requires the GeoIP library.

Module config and info here: https://github.com/varnish/libvmod-geoip

Updated!

:)

Monday, May 11, 2015

Ansible + Linode API + CentOS

Fast Mode ON! If you don't understand something.... ask in the comments.

Requirements for CentOS:
  • yum install pip
  • pip install linode-python
  • pip install chube

Template:


These values are from the API:

plan: 1           #cheapest 
datacenter: 6     #newmark NJ
distribution: 127 #centos 6.5

There are other possible values and you will need to ask the API for them, so, to see the full info for these three things from the Linode API (distribution IDs, datacenters and plans), you can run this nodejs script:


Don't forget the sudo npm install linode-api -g
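If you prefer not to install the node module, the same lists can be pulled from the old Linode API (v3) directly with curl; the API key is a placeholder:

API_KEY="your-linode-api-key"
curl -s "https://api.linode.com/?api_key=${API_KEY}&api_action=avail.datacenters"
curl -s "https://api.linode.com/?api_key=${API_KEY}&api_action=avail.linodeplans"
curl -s "https://api.linode.com/?api_key=${API_KEY}&api_action=avail.distributions"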

:)

Fast Mode OFF!

Monday, April 27, 2015

Forcing ansible playbooks to concrete hosts (and vagrant version)

This is a fast workaround to force a playbook to run on a specific host.
Important: you must have the host added to the Ansible host inventory.

You will need to convert hosts to a variable. From:

- name: Installing base server template
  hosts: all
  gather_facts: true
  roles:
   - base
   - ntpenabled


To:

- name: Installing base server template
  hosts: '{{ hosts }}'
  gather_facts: true
  roles:
   - base
   - ntpenabled

And now, in a terminal, run the playbook:

ansible-playbook <playbook.yml> --extra-vars="hosts=<ip_or_hostname_here>"


and for vagrant:

  config.vm.define "test" do |test|
     test.vm.box = "chef/centos-6.6"
     test.vm.network "private_network", ip: "10.1.1.13"
     test.vm.provision "ansible" do |ansible|
       ansible.playbook = "ansible/playbooks/base.yml"
       ansible.sudo = true
       ansible.extra_vars = {
          hosts: "ip_or_hostname_here"
       }
     end
  end

Tuesday, April 21, 2015

Ansible + Vagrant: forget your interactive prompts (SOLVED)

If you have a playbook with something like this:

- name: Installing test box
  hosts: all   
  connection: paramiko
  vars_prompt:
     - name: "hosthname"
       hosthname: "Give me a hostname:"
       private: no
  gather_facts: true
  roles:
   - base
   - redisenabled
   - nodebase


And you are trying to run it with Vagrant using this snippet of the Vagrantfile:

  config.vm.define "test" do |test|
     test.vm.box = "chef/centos-6.6"
     test.vm.network "private_network", ip: "10.1.1.13"
     test.vm.provision "ansible" do |ansible|
       ansible.playbook = "ansible/playbooks/test.yml"
       ansible.sudo = true
     end
  end

This var (hosthname) is not interactive under Vagrant; you will never be asked.

What is the trick? I tried this workaround and I liked it:

  • Just in case, I would create a default value for the variable.
  • Force the value of the variable in the Vagrantfile

So, the final config files would be:
  • Playbook:
- name: Installing test box
  hosts: all   
  connection: paramiko
  vars_prompt:
     - name: "hosthname"
       hosthname: "Give me a hostname:"
       private: no
       default: "test01-default"
  gather_facts: true
  roles:
   - base
   - redisenabled
   - nodebase

  • Vagrantfile
  config.vm.define "test" do |test|
     test.vm.box = "chef/centos-6.6"
     test.vm.network "private_network", ip: "10.1.1.13"
     test.vm.provision "ansible" do |ansible|
       ansible.playbook = "ansible/playbooks/test.yml"
       ansible.sudo = true
       ansible.extra_vars = {
          hosthname: "test01"
       }
     end
  end



Wednesday, April 1, 2015

Boost C++ library RPM packages for CentOS 6

I have created some RPM packages of the Boost C++ libraries, versions 1.54.0-8.20.2, 1.55.0, 1.56.0, 1.57.0, 1.58.0 and 1.59.0, for CentOS x64 (no 32 bits, sorry).

Building the Boost C++ Libraries with:

Performing configuration checks

    - 32-bit                   : no
    - 64-bit                   : yes
    - arm                      : no
    - mips1                    : no
    - power                    : no
    - sparc                    : no
    - x86                      : yes
    - lockfree boost::atomic_flag : yes
    - has_icu builds           : yes
warning: Graph library does not contain MPI-based parallel components.
note: to enable them, add "using mpi ;" to your user-config.jam
    - zlib                     : yes
    - iconv (libc)             : yes
    - icu                      : yes
    - compiler-supports-ssse3  : yes
    - compiler-supports-avx2   : no
    - gcc visibility           : yes
    - long double support      : yes
    - zlib                     : yes

Component configuration:

    - atomic                   : building
    - chrono                   : building
    - container                : building
    - context                  : building
    - coroutine                : building
    - date_time                : building
    - exception                : building
    - filesystem               : building
    - graph                    : building
    - graph_parallel           : building
    - iostreams                : building
    - locale                   : building
    - log                      : building
    - math                     : building
    - mpi                      : not building
    - program_options          : building
    - python                   : building
    - random                   : building
    - regex                    : building
    - serialization            : building
    - signals                  : building
    - system                   : building
    - test                     : building
    - thread                   : building
    - timer                    : building
    - wave                     : building


Easy to add:
sudo wget https://bintray.com/vicendominguez/CentOS6/rpm -O /etc/yum.repos.d/bintray-vicendominguez-CentOS6.repo
sudo yum install boost-devel

:)

Tuesday, March 3, 2015

ping to multiple hosts at the same time with fping

How do you ping multiple hosts with fping, showing which hosts have between 200 ms and 999 ms of latency, to detect hosts with network issues in the LAN?
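One way to do it (a sketch: fping -e prints the round-trip time of every host that answers, and the grep keeps only three-digit latencies, i.e. 200-999 ms):

watch -n 10 'fping -e -f IPVD.txt 2>/dev/null | grep -E "\([2-9][0-9]{2}(\.[0-9]+)? ms\)"'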


IPVD.txt is a file with the IP list. No limits. I was using 400 IPs.
the "watch" tool to check the results periodically is cool.

:)

Wednesday, February 18, 2015

supervisord in CentOS 7 (systemd version)

Hello,

Fast installation on CentOS 7 of this "helper" for the queue services in the Laravel or Django frameworks. The EPEL package is too old, so:
  1. yum install python-setuptools python-pip
  2. pip install supervisor
  3. mkdir -p /etc/supervisord
  4. echo_supervisord_conf > /etc/supervisord/supervisord.conf
  5. a forked systemd unit file (thx to Jiangge Zhang) in /usr/lib/systemd/system/supervisord.service:
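A minimal sketch of such a unit (not the original gist; paths assume the pip install above) can be dropped in place like this:

cat > /usr/lib/systemd/system/supervisord.service <<'EOF'
[Unit]
Description=Supervisor process control system
After=network.target

[Service]
Type=forking
User=nginx
ExecStart=/usr/bin/supervisord -c /etc/supervisord/supervisord.conf
ExecStop=/usr/bin/supervisorctl -c /etc/supervisord/supervisord.conf shutdown
ExecReload=/usr/bin/supervisorctl -c /etc/supervisord/supervisord.conf reload
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF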


And: 
  1. systemctl enable supervisord
  2. systemctl start supervisord

User=nginx is useful to run this process as the nginx user. You can change it, but the user must exist in the system.

Wednesday, February 4, 2015

FOSDEM 2015

F.O.S.D.E.M 2.0.1.5

very crazy!




:-O

Thanks to everybody.