List of IP addresses

The project is a “search engine for Internet-connected devices” that scans the internet for alive hosts, open ports and services, and publishes the results on its web pages.

It is an interesting project for general network and security research. If you are a network manager, it's a convenient way of seeing what your own network looks like from the outside, to observers and attackers alike. Moreover, scans are frequent, so you can spot new devices on your network quickly.

It makes little sense to block the scans as a security measure, because security by obscurity does not work. Moreover, you would be blocking only the most casual attackers and researchers who rely on the service; clever, determined actors won't be deterred just because nothing about your networks shows up there.

Still, there are situations where you may want to block the scanners from probing your network. In that case, make sure you have an alternative way of monitoring your network from the internet and of getting warnings about interesting changes. The scanning is done from a set of IP addresses which all resolve to * names. Names and IPs change regularly.

Most of the IPs can be obtained by directly resolving the names, but some IPs only reverse-resolve, i.e. the forward resolution of their domain name points to another IP.

It seems that the people behind the project want to make their scanners easily recognizable, but don't want to make it super easy to block them.

I collect a list of domains and IPs that have scanned the networks I manage over time. It contains IPs directly resolved from the * domain names, plus IPs that reverse-resolve into them. The list is updated daily; I add new entries as they appear.

The list is here:


Recidives, wordpress and fail2ban

This is what happens when you configure fail2ban to ban recidive hosts for one week:

[fail2ban-week graph] The blue line is a wave of IP addresses probing my sites for a WordPress vulnerability and triggering the wordpress-hard jail; the yellow line represents recidive addresses (hosts banned more than 5 times over a week, then banned for a week).
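A recidive setup along these lines produces that behavior. This is a sketch based on fail2ban's stock recidive jail, with the retry and ban values from the graph above; the log path may differ on your system:

```ini
[recidive]
enabled   = true
# the recidive filter watches fail2ban's own log for repeated bans
logpath   = /var/log/fail2ban.log
# repeat offenders get blocked on all ports, not just the original one
banaction = iptables-allports
# more than 5 bans within one week triggers a one-week ban
maxretry  = 5
findtime  = 604800
bantime   = 604800
```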


Tor: using bridges and pluggable transport on the command line

If you use Tor Browser and your ISP blocks Tor, you can easily configure bridges within Tor Browser. You can even use a pluggable transport such as obfs4proxy to avoid detection by a network observer at the application level.

If you are running tor on the command line, for example on a Linux server, you can still use bridges and pluggable transports:

First, get some bridges.

Second, make sure you have installed tor and obfs4proxy.

Then, edit /etc/tor/torrc and add UseBridges, ClientTransportPlugin and Bridge lines:

UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.1:443 ABCDEFG cert=123456789abcdefg iat-mode=0

If you don't need obfs4proxy, you can just use a plain bridge line (address:port plus fingerprint) like:

UseBridges 1
Bridge 192.0.2.1:443 123456ABCDEF

You can add several Bridge lines, in case one of them does not work.

After you start tor and check that it works, you can use it with the torify command or however you like.

A bruteforce botnet targeting a wordpress site

One of my WordPress sites has been hit by an organized brute-force login attack originating from a botnet. Since I use fail2ban on the WordPress logs with a 1-hour bantime, it was interesting to watch the bots come one at a time, trigger a ban almost immediately, and be followed at once by the next IP, again and again. The attack lasted about one hour and featured about 120 distinct IPs. You can see the spike in the fail2ban graph:

[fail2ban-month graph] The usernames tried were the evergreen admin, plus domain and domaintld.

Update 20160927: more waves of brute-force logins coming from dozens of different IPs. The real number of IPs is about 1/3 higher, due to the munin graph compression.


How to stop wordpress bruteforce logins with fail2ban

If you use WordPress and you enjoy reading the webserver logs like I do, you will see many failed login attempts produced by bots trying to guess common passwords for the accounts on your blog. This is bad because:

  • if one user has a weak or unsafe password, it will be compromised and your blog probably owned;
  • every user enumeration and password guess hits wp-login.php, bypassing any caching you may have and wasting resources that your server could use to serve content.

I have tried a handful of plugins to limit login attempts, but they all block brute-force attempts at the PHP level, so every attempt still costs a full PHP request.

If you have a dedicated server or a VPS, you can use the WP fail2ban plugin to write failed login attempts to /var/log/auth.log, then install and configure fail2ban to block offending IPs via iptables. The plugin documentation is straightforward and you should follow the recommended settings.

The key advantage of the iptables approach is that the overhead of blocking brute-force bots is very low compared to PHP. The downside is that IP blocking is not very flexible if you have many legitimate users coming from shared networks, so use it only if you know what you are doing.

I have found the following settings especially effective:

  • raise the bantime from 300 seconds to something higher, like 3600 (1 hour). This has decreased the overall number of blocked IPs.
  • rename the admin account and set WP_FAIL2BAN_BLOCKED_USERS in wp-config.php accordingly. This triggers the wordpress-hard jail and blocks bots instantly.
  • disable IPv6 on your domains. Unfortunately, fail2ban does not yet support IPv6 (version 0.10 will) and IPv6 bots do exist. Forcing them onto IPv4 lets fail2ban block the attacks.

Don't forget to configure fail2ban properly: to avoid banning your own server's IP address, add the corresponding entry to the ignoreip line in your jail configuration. Otherwise you risk blocking wp-cron and getting "missed schedule" errors on your blog.
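Putting the settings above together, a jail.local fragment might look like this. It is a sketch assuming the wordpress-hard filter shipped with the WP fail2ban plugin; 203.0.113.10 is a placeholder for your server's own public IP:

```ini
[DEFAULT]
# never ban localhost or the server itself (placeholder IP),
# so wp-cron keeps working
ignoreip = 127.0.0.1/8 203.0.113.10

[wordpress-hard]
enabled  = true
filter   = wordpress-hard
# the WP fail2ban plugin logs failed logins to the auth facility
logpath  = /var/log/auth.log
port     = http,https
# 1 hour instead of the default 300 seconds
bantime  = 3600
```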

Revamping an old MacBook with RAM and SSD

I recently worked on a late 2009 13-inch Apple MacBook (Model A1342, 2.26 GHz) which was still in good working condition. It's an Intel Core 2 Duo laptop with a 120GB hard disk and 1GB RAM, running Mac OS X 10.5. With a little upgrade it's a decent computer for everyday office work in 2014.

The hardware part is straightforward: I bought a 250 GB Samsung SSD and 4 GB of DDR2 667MHz RAM, took out the battery pack and removed the lateral metal rail to access the RAM slots and the hard drive bay. You need a cross-head and a Torx screwdriver to remove the rail and free the drive from its cradle.

The software part takes a bit longer: I settled on Mac OS X 10.6 Snow Leopard, which is stable, tested and actively supported. It is possible to import all the old data and settings from a Time Machine backup or an OS X drive during the new installation, just by plugging it in via USB.

The issue with an SSD is that it's very fast (provided the onboard SATA bus is not a bottleneck) but its lifespan depends on the read/write cycles it endures. All the optimizations recommended for SSDs try to reduce reads and writes by tweaking the filesystem and the OS.

The reference for OS X SSD optimization is Martin Matula's Optimizing MacOS X Lion for SSD, which is about OS X 10.7 Lion. All the steps described there are similar to what you would do on other Linux/Unix systems and should be adapted to your situation, except the step "Turn off local Time Machine snapshots" using tmutil in the terminal: tmutil does not exist in 10.6, and local snapshots seem to be a new feature in 10.7.

Since an SSD will fail sooner or later, it's important to back up. Time Machine on an external drive is the easiest way to do it; however, there's a subtle catch when using FileVault home directory encryption: Time Machine backs up an encrypted home directory only when the user is logged out. This is seldom the case on a single-user laptop that is either turned off or logged in.

It seems that in this case you have to trade off security (home directory encryption with FileVault) against data availability (frequently updated backups). Depending on your risk assessment you may decide to keep data encryption and do manual backups with some other method (such as rsync), or automate backups and leave your data in the clear. Weigh the risk of having the laptop stolen and your data accessed against the risk of a hardware failure and your data lost, and enable or disable FileVault accordingly. In either case you need good backups.

Another migration from Movabletype to WordPress

I had to move a very old Movabletype site over to a WordPress install. This is the migration log, for future reference.

The original site was an MT 2.5 install from 2002/2003 that had been running until now, literally untouched since a major server move in 2006, with new posts being added daily and a grand total of 5000 posts, mostly medium- and long-form essays and fiction.

The site suffered from severe performance problems even under moderate load, caused both by its outdated server and by the inability of MT's code to cope with a modern web environment. Comments were disabled but trackbacks were not, resulting in huge amounts of pingback spam. There were no plugins and no customizations other than the site's own graphic template.

First, I refreshed my memories from 2005, when I migrated a similar site. Then I discovered this post by David B. Bitton that shows a new way of importing data while preserving old permalinks and SEO. I followed David's steps; here's how:

Make a backup of the database and the files (cgi-bin and document root) and replicate the install on my laptop, so that I can set config/memory limits as I please. Log in to the local MT site (let's call it "local1"), change the "blog config" parameters to reflect the local settings, then rebuild the site in order to visually check the installation and use it for reference (the original style, color palette, exact content).

After that, clone “local1” into “local2” (mysqldump, new db, restore db, copy files, change mt.cfg settings from local1 to local2). “local1” will be the reference old installation, “local2” will be the actual upgrade installation.

The goal here is to upgrade to a Movabletype version where I can do an XML "backup" of the blog, as opposed to an MT "export" in text format. The key difference is that the XML backup keeps the entry ID of every post, which will be used later in the WordPress import. This feature was first introduced in MT 4.x (thanks to Mihai Bocsaru in the MT forums for the information). I decided to upgrade straight to 4.x without hopping through several intermediate upgrades (2.x → 3.x → 4.x) and it worked, but if you have a more complex configuration, with comments, plugins and customizations, you may prefer the long upgrade path.

Download MT 4.37 (the earliest 4.x available) and read the documentation on upgrades in the tarball's docs/mtupgrade.html. The two key issues are that mt-db-pass.cgi is deprecated and that mt.cfg has moved to mt-config.cgi.

Go to the "local2" cgi-bin directory, copy mt.cfg to mt-config.cgi, and add a DBPassword "yourdbpassword" statement. Then set execute permissions on mt-config.cgi. Copy the MT 4.37 files over the old installation. Go to localhost/cgi-bin/mt.cgi (or wherever you installed local2): an upgrade page will walk you through the upgrade process, with a nice progress bar and a helpful upgrade log. I had to check apache2's error.log several times because of missing static files (javascript and such) that I had forgotten to move to the proper place.
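The configuration step above can be sketched as shell commands; the function name and example path are mine, and the password is of course a placeholder:

```shell
# Turn an old mt.cfg into the mt-config.cgi that MT 4.x expects:
# copy it, append the database password statement, make it executable.
prepare_mt_config() {
  cd "$1" || return 1    # argument: path to the cgi-bin directory
  cp mt.cfg mt-config.cgi
  printf 'DBPassword "yourdbpassword"\n' >> mt-config.cgi
  chmod +x mt-config.cgi
}

# Hypothetical usage: prepare_mt_config /var/www/local2/cgi-bin
```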

When the upgrade is complete, log in to the new "local2" site and check that everything is fine. It is not, actually, because somewhere between MT 2.5 and 4.3 the character encoding switched from Latin1 to UTF-8 and all accented letters are garbled. Ignore this issue for the time being (it's not well documented; MT is old software with a long, complicated proprietary/open history).

Go to Tools > Backup in your MT dashboard and do an uncompressed, undivided backup. The output is an XML file of all your blog contents. The original MT 2.5 database was 100 MB; the XML is just a little above that. Open the XML file with a text editor: in my case half of the contents were pingback/trackback spam, which I deleted; the final size was 50MB.

(The pingback spam could have been deleted from the original database via SQL in the first place, but I was not familiar with the MT 2.5 database schema. Deleting spam from the db before upgrading will make the following process faster.)

Then convert the character encoding of the XML backup file with iconv:
iconv -f ISO-8859-1 -t UTF-8 file > newfile
It still contained illegal characters, to be replaced with their UTF-8 equivalents or their XML escape entities, especially apostrophes in the entry titles. I used Firefox to check and validate the XML file every time. (A possible explanation for this character havoc is the accumulation of thousands of posts by dozens of authors over a decade: different word processors, operating systems, writing habits. Nice mess!)
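The two cleanup steps can be combined into one pipeline. This is a sketch: the sed expression only covers the most common offender (bare ampersands that are not part of an entity), so manual fixes and re-validation are still needed afterwards:

```shell
# Convert Latin-1 input to UTF-8, then escape bare '&' characters
# that are not already the start of an XML entity reference.
fix_xml() {
  iconv -f ISO-8859-1 -t UTF-8 | sed -e 's/&\([^a-zA-Z#]\)/\&amp;\1/g'
}

# Hypothetical usage: fix_xml < backup.xml > backup-utf8.xml
```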

From now on, follow David B. Bitton's post. I made a local WordPress installation, added the Movable Type Backup Importer plugin, and inserted the following just after line 407 of class-mt-backup-import.php:
$post->import_id = $id;

Import the XML file and wait; it takes some time for 5000 posts.

Then set the permalink structure; I use the year/month/day/post-title style and slightly different rewrite rules. The key is to redirect existing incoming links aimed at the old permalink structure to WordPress's default short (numeric) URLs, and to redirect the feed subscribers. Here's the .htaccess (snippet):

RewriteEngine On
RewriteRule ^archives/[0-9]{4}/[0-9]{2}/0*(\d+)\.html$ /?p=$1 [R=301,NC,L]
RewriteRule ^archives/[0-9]{4}/[0-9]{2}/0*(\d+)print\.html$ /?p=$1 [R=301,NC,L]
RewriteRule ^archives/([0-9]{4})_([0-9]{2})\.html$ /$1/$2 [R=301,NC,L]
RewriteRule ^archives/cat_([a-z_]*)\.html$ /categorie/$1 [R=301,NC,L]
RewriteRule ^archives\.html$ / [R=301,NC,L]
RewriteRule ^index\.rdf$ /feed [R=301,NC,L]
RewriteRule ^index\.rss$ /feed [R=301,NC,L]
RewriteRule ^index\.xml$ /feed [R=301,NC,L]
RewriteRule ^atom\.xml$ /feed [R=301,NC,L]

# BEGIN WordPress

Edit: there are a couple more rules I added, depending on the original permalink structure of the old site. I found them by checking the Apache logs for 404 Not Found errors.

Now I have a working localhost site with all the original content. Check that nothing is missing and verify the users/passwords. Check the old MT media directory (usually /archives) and delete all the .xml files (mostly trackback spam), the html files (single posts, category archives, date archives) and the date directories.

Finally, move it to a production server (I prefer editing a database dump for this purpose) and enjoy. Keep an eye on the web server logs for 404s and adjust the htaccess rules as needed.

Fix for the graphics bug in Ubuntu Lucid Lynx

Intel Corporation 82852/855GM Integrated Graphics Device [8086:3582] (rev 02)

If you upgrade to Ubuntu 10.04 on a PC with this graphics chip, X freezes and the machine becomes unusable; all you are left with is a safe-mode boot and a root terminal. Of the many solutions listed here, go straight to Workaround D: Use a kernel other than 2.6.32, and install a 2.6.34 kernel, which will tide you over until the next release, 10.10, arrives.

I will not comment on Ubuntu's choice of shipping an LTS with such a big bug, even if it only affects oldish PCs.