SOLVED: Sharing TweetDeck settings across multiple Windows and Linux machines


I’ve been using TweetDeck for quite some time on Linux, after I managed to get Twhirl working on Linux under 64-bit Adobe AIR.

TweetDeck is a lovely app, graceful and very useful. It has its share of minor visual and UI bugs, but it’s the best of the hundreds of Twitter apps I’ve seen out there… and it’s 100% free.

I have 3 laptops I use on a regular basis running both Windows and Linux. I’m not always on the Linux laptop, but I wanted to make sure that my TweetDeck settings on my Windows laptops were identical to the ones on my Linux laptop, including all of my searches, columns and other settings. I plug one of my Windows laptops into my television, so I can use the larger screen as my monitor.
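The full write-up goes into the details, but the approach boils down to Dropbox synchronization: keep TweetDeck’s Adobe AIR settings store (“Local Store”) in your Dropbox folder and symlink it back into place on each machine. Here’s a rough sketch of that idea on the Linux side — the paths are placeholders (the real directory name includes TweetDeck’s AIR application ID), and this is an illustration of the idea rather than the exact steps from the full post:

#!/usr/bin/perl
# Sketch: keep TweetDeck's Adobe AIR settings store in Dropbox and symlink
# it back into place, so every machine shares one copy of the settings.
# NOTE: both paths below are placeholders, not the real AIR app-ID paths.
use strict;
use warnings;
use File::Basename qw(dirname);
use File::Path qw(make_path);

my $settings = "$ENV{HOME}/.appdata/TweetDeckFast.EXAMPLE/Local Store";
my $dropbox  = "$ENV{HOME}/Dropbox/TweetDeck/Local Store";

# First machine only: seed Dropbox with the existing settings
if (!-d $dropbox && -d $settings) {
        make_path(dirname($dropbox));
        system('cp', '-a', $settings, $dropbox) == 0 or die "copy failed: $?";
}

# Every machine: swap the local store for a symlink into Dropbox
unless (-l $settings) {
        make_path(dirname($settings)) unless -d dirname($settings);
        rename $settings, "$settings.bak" if -d $settings;
        symlink $dropbox, $settings or die "symlink failed: $!";
}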


Techniques for slowing down/stopping external attacks on your Apache server


I’ve been running an Apache server for over a decade, serving up hundreds of websites over the years, and one thing remains constant: abusers attacking Apache, looking for a way in, or for a way to DDoS your server so others can’t get to the content you’re providing.

We don’t call these people “hackers” or “crackers”; we don’t even call them “criminals”. They’re just idiots, and they’re easily stopped.

The rest of this post will show quite a few ways to slow or stop these attackers from taking down your Apache web server or abusing it in any way.
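The full post goes into specific tools and configuration, but as a taste of the general approach, here’s a small, hedged sketch of one building block: tally requests per client IP from an Apache access log, so the heaviest abusers can be fed into a firewall rule or a Deny directive. The log path and threshold are arbitrary examples.

#!/usr/bin/perl
# Sketch: count requests per client IP in an Apache access log and print
# the heaviest hitters -- candidates for an iptables rule or "Deny from".
# The default log path and threshold are arbitrary examples.
use strict;
use warnings;

my $log       = shift || '/var/log/apache2/access.log';
my $threshold = 500;

my %hits;
open my $fh, '<', $log or die "Cannot open $log: $!";
while (my $line = <$fh>) {
        my ($ip) = split ' ', $line;   # client IP is the first field
        $hits{$ip}++ if defined $ip;
}
close $fh;

for my $ip (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
        last if $hits{$ip} < $threshold;
        print "$ip\t$hits{$ip} requests\n";
}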


How to Become a High-Tech Minimalist


This will be the first in a series of posts I’ll write about going minimal as a technologist in today’s world.

To most people, the mere mention of the word “minimalist” or “minimalism” means “getting rid of luxuries and conveniences” and going back to basics. The former is a myth, but the latter really is the goal. Everyone can get by with a lot less “stuff” in their lives, and what remains can certainly still be convenient, current, “cool” and functional.

Being a high-tech minimalist means reducing what you have, but not necessarily spending less to get there. To reduce the number of things in your life, you may have to spend more now, on fewer things, so you can ultimately spend less in the future.

There are certainly extremists in this field, who want to try to get their lives down to zero-impact, zero-waste, zero consumption, but I am not personally on that side of the dial.

My life is surrounded by ones and zeros. Lots and lots of them. I have a lot of high-tech gear at my fingertips at any given moment. This is my digital life. Multiple laptops, servers and dozens of chargers and cables are all jacked in at once, and that’s not counting my office at work and its various sundry items.

But I also have my analog life, which includes archives of paperwork going back 10-15 years. Boxes and file cabinets of paperwork, files, documents, articles, magazines, books and other material that I’ve needed to capture or save over the years.

As I move to the next stage of my life, I’m looking very hard at everything I own, everything I use, and making a very binary decision:

  1. Keep it (because I need it or use it on a regular basis)
  2. Let it go (because I no longer need it, use it, or have replaced it with something better)

There is no third option.

I’m approaching this new lifestyle change because frankly, I have too much stuff.

Stuff leads to clutter.
Clutter leads to chaos.
Chaos leads to living a confusing, unfocused life.

I need to reduce the complexity of my life, by reducing the clutter and chaos within it.


There is No Anonymity with that Torrent


I’ve been running a public BitTorrent tracker for about 7 years for several of the Open Source projects I host (Plucker, J-Pilot, pilot-link).

People ask me all the time in private email how they can be “completely anonymous” when torrenting. I can only assume they want to share some copyrighted material with their torrent client, and don’t want the MPAA or RIAA chasing them down.

The quick and dirty answer is: you can’t!

Azureus Peers List

There are plenty of tools out there that let you lock down your torrent client, block domains, even block an entire country, but your IP and connection state are still shared across all peers you’re sharing with, or downloading data from.

Even tools like Tor can’t be used for this, because you never know who runs the exit nodes, and that is where your actual IP address is exposed. You can’t trust those endpoints.

What this means is that you can block all of the peers coming from US networks and netblocks and only allow connections from non-US peers, but those non-US peers are probably allowing connections from the very same US peers you’re blocking.

Let me explain:

  1. You block all US peers using SafePeer, PeerGuardian, MoBlock or other tools.
  2. You connect to a peer in Romania using your “trusted” BitTorrent client (such as Vuze, formerly Azureus).
  3. That Romanian peer connects to some US peers (possibly ones running on RIAA or MPAA harvesting hosts).
  4. Your IP and connection state have just been exposed to the very US hosts you were trying to block.

There are ways to attempt to anonymize your traffic and connection state from the tracker (the main point of leakage, and the primary target of the MPAA/RIAA), but it requires that you understand and implement technologies like I2P, and configure them appropriately, end-to-end.

“I2P is an anonymizing network, offering a simple layer that identity-sensitive applications can use to securely communicate. All data is wrapped with several layers of encryption, and the network is both distributed and dynamic, with no trusted parties.”

I’ve been toying with i2p lately as a means of securing some internal IRC chat servers that I run. It’s a bit slower, but it does do the job, and does it very well.

I don’t personally need to ride BitTorrent behind the i2p network, but plenty of others are doing exactly that with i2p.

i2p is a bit younger in the game of creating free, anonymous network traffic, and others that came before it (like Freenet) provide more flexibility and a more distributed network, but it is maturing fast and is very capable.

The main thing Freenet provides that i2p does not (at this point) is distributed data storage. However, the i2p developers are working on that [i2p] [http] (warning: the i2p URL won’t work unless you have your i2p proxy and tunnels configured correctly).

Just keep in mind that if you want to “hide” yourself, you need an entirely different kind of network: one that relies on decentralized peers who do NOT trust each other, and that uses encryption at every possible turn to ensure nothing is peeked at, sniffed or re-transmitted.

p.s.: If you must, use iMule or these instructions for i2p-enabling Azureus

Snapshot backups of EVERYTHING using rsync (including Windows!)


Let me just start by saying that I have a lot of data. In multiple places. Some on laptops, some on servers, some on removable drives and mirrored hard disks sitting in a bank vault (yes, really). Lots of data on lots of systems in different states and locations: client data, personal data, work data, community data and lots more.

Over the years, I’ve tried my best to unify where that data is sourced from, for example by relocating the standard “My Documents” location on all of my Windows machines (physical and virtual), to point to a Samba share that is served up by a GELI-encrypted volume on my FreeBSD or Linux servers. That part works well, so far, but that’s only a small piece of the larger puzzle.

Over the last decade, the amount of data I’m holding and responsible for managing has grown significantly, and I needed a better way to manage it all.

There are plenty of backup solutions for Linux, including the popular Amanda and Bacula, but I needed something more portable, leaner and much more efficient. That quest first led me to Unison, mostly due to its cross-platform support, but it was still a bit more complicated than I needed.

So I kept looking and eventually found rsnapshot, a Perl-based tool wrapped around the standard rsync utility written by Andrew Tridgell.

Since I’d already been using rsync quite a bit over the last 10 years or so to copy data around as I needed it, and to perform nightly full backups of my remote servers, I decided to look into using it to manage a new backup solution built around incremental backups as well as full ones.
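The trick that makes rsnapshot-style snapshots so efficient is rsync’s --link-dest option: files that haven’t changed get hard-linked against the previous snapshot, so every snapshot looks like a full backup while only the changed files consume new disk space. Here’s a minimal sketch of that mechanism as a Perl wrapper — the host and paths are placeholders, and this illustrates the idea rather than reproducing my actual backup scripts:

#!/usr/bin/perl
# Sketch: dated snapshot directories where unchanged files are hard-linked
# against the previous snapshot via rsync's --link-dest option.
# The source host and destination paths below are placeholders.
use strict;
use warnings;
use File::Path qw(make_path);
use POSIX qw(strftime);

my $source   = 'backupuser@server.example.com:/home/';
my $dest_dir = '/backups/server.example.com';
my $latest   = "$dest_dir/latest";
my $snapshot = "$dest_dir/" . strftime('%Y-%m-%d', localtime);

make_path($dest_dir) unless -d $dest_dir;

my @rsync = (
        '/usr/bin/rsync', '-a', '--delete',
        (-d $latest ? ("--link-dest=$latest") : ()),
        $source, $snapshot,
);
system(@rsync) == 0 or die "rsync failed: $?";

# Re-point the 'latest' symlink at the snapshot we just took
unlink $latest if -l $latest;
symlink $snapshot, $latest or die "symlink failed: $!";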

I’m already using rsync to pull a couple of terabytes of mirrored data to my servers on a nightly basis. I’m mirroring CPAN, FreeBSD, Project Gutenberg, Cygwin, Wikipedia and several other key projects, so this was a natural graft onto my existing environment.


Converting FLOSS Manuals to Plucker format


I stumbled across a site called “FLOSS Manuals” recently, and thought that it would be a great place to create some new Plucker documents for our users, and distribute them. I’ve created hundreds of other Plucker documents for users in years past, so this was a natural progression of that.

You can (and should!) read all about their mission and see what they’re up to. Credit goes to Adam Hyde (the founder), Lotte Meijer (for the design), Aleksandar Erkalovic (developer) and “Mr Snow” for keeping the servers running, among many other contributors.

The FLOSS Manuals Foundation (Stichting FLOSS Manuals) creates free, open source documentation for free, open source software. FLOSS Manuals is a community of free documentation writers that publish free manuals about free software across multiple languages.

Free software can be freely run, studied, redistributed and improved without the restrictive and often expensive licensing systems attached to commercial proprietary software. Developers can adapt free software to their own needs, and can contribute to its ongoing communal development. FLOSS Manuals specifically document software that is free in this development sense and also in price. Free software projects are developed using established methodologies and tools, and sites like Savannah and Sourceforge support established social production models for developing free software. FLOSS Manuals provides the methodologies, tools, and social production models for developing documentation of free software.

By supporting quality, user-friendly documentation of Free, Libre, Open Source Software, FLOSS Manuals aims to encourage the use of this software, to support the technical and social revolution it enables.

If you want to support their cause (and I strongly recommend you do), you can visit their bookstore directly. (Note: I get absolutely nothing for hosting a link to their bookstore here; no affiliate links or commissions whatsoever).

When I quickly Googled around, I found that someone was already doing exactly that, albeit with a one-off shell script.

I decided to take his work and build upon it, making it self-healing, and created what I call the “Plucker FLOSS Manuals Autobuilder v1.0” :)

This is all clean Perl, and it is self-healing: when FLOSS Manuals updates their site with more content, this script will continue to be able to download, convert and build that new content for you… no twiddling necessary.

The code is well-commented, and should be clear and concise enough for you to be able to use it straight away.

Have fun!

#!/usr/bin/perl -w

use strict;
use warnings;
use diagnostics;
use LWP::UserAgent;
use HTML::SimpleLinkExtor;

my $flossurl    = 'http://en.flossmanuals.net';
my $ua          = 'Plucker FLOSS Manuals Autobuilder v1.0 [desrod@gnu-designs.com]';
my $top_extor   = HTML::SimpleLinkExtor->new();

# fetch the top-level page and extract the child pages
$top_extor->parse_url($flossurl, $ua);
my @links       = grep(m:^/:, $top_extor->a);
pop @links;     # get rid of '/news' item from @links; fragile

# Get the print-only page of each child page
get_printpages($flossurl . $_) for @links;

############################################################################# 
#
# Get the pages themselves, and return their content to the caller
#
#############################################################################
sub get_content {
        my $url = shift;

        my $ua          = 'Mozilla/5.0 (en-US; rv:1.4b) Gecko/20030514';
        my $browser     = LWP::UserAgent->new();
        $browser->agent($ua);
        my $response    = $browser->get($url);
        my $decoded     = $response->decoded_content;
 
        # This was necessary, because of a bug in ::SimpleLinkExtor,
        # otherwise this code would be 10 lines shorter. Sigh.
        if ($response->is_success) {
                return $decoded;
        }
}


############################################################################# 
#
# Fetch the print links from the child pages snarfed from the top-level page
#
#############################################################################
sub get_printpages {
        my $page = shift;

        my $sub_extor   = HTML::SimpleLinkExtor->new();
        $sub_extor->parse(get_content($page));

        # Single out only the /print links on each sub-page
        my @printlinks  = grep(m:^/.*/print$:, $sub_extor->a);

        my $url         = $flossurl . $printlinks[0];
        (my $title      = $printlinks[0]) =~ s,\/(\w+)\/print,$1,;

        # Build it with Plucker
        print "Building $title from $url\n";
        plucker_build($url, $title);
}


############################################################################# 
#
# Build the content with Plucker, using a "safe" system() call in list-mode
#
#############################################################################
sub plucker_build {
        my ($url, $title) = @_;

        my $workpath            = "/tmp/";
        my $pl_url              = $url;
        my $pl_bpp              = "8";   
        my $pl_compression      = "zlib";
        my $pl_title            = $title;
        my $pl_copyprevention   = "0";
        my $pl_no_url_info      = "0";
        my $pdb                 = $title;

        my $systemcmd   = "/usr/bin/plucker-build";

        my @systemargs  = (
                        '-p', $workpath, 
                        '-P', $workpath,
                        '-H', $pl_url,
                        $pl_bpp ? "--bpp=$pl_bpp" : (),
                        ($pl_compression ? "--${pl_compression}-compression" : ''),
                        '-N', $pl_title,
                        $pl_copyprevention ? $pl_copyprevention : (),
                        $pl_no_url_info ? $pl_no_url_info : (),
                        '-V1',
                        "--staybelow=$flossurl/floss/pub/$title/",
                        '--stayonhost',
                           '-f', "$pdb");

        system($systemcmd, @systemargs);
}

Hey, you got your TVs in my Internets


A top Republican Internet strategist who was set to testify in a case alleging election tampering in 2004 in Ohio has died in a plane crash. Michael Connell was the chief IT consultant to Karl Rove and created websites for the Bush and McCain electoral campaigns.

Photos: consultant Mike Connell; the Connell crash site; Karl Rove and George W. Bush

When rescue crews attempted to gain information about the accident, so they could determine what equipment to bring, they were told that the information was “locked down”:

Captain Geisner expressed considerable frustration during several Raw Story interviews over what he alleges was the withholding of critical details by authorities.

“While en route to the fire, I asked dispatch to learn the size of the plane and the number of souls on board,” Geisner explained. “This was not provided us.”

Such details allow fire department officials to determine whether additional equipment is needed and if a wider search and rescue is required. Within fifteen minutes of the crash, after officials from Akron-Canton Airport had arrived on the scene, Geisner again sought to confirm the number of passengers.

“After calls were made I was told that the ATC [Air Traffic Control] was ‘all in lockdown,’ and that they said ‘we can’t release that information,'” Geisner said.

Todd Laps, Fire Chief of the Akron-Canton airport fire department and a liaison to the Transportation Security Administration and the Air Traffic Control, echoed Geisner’s account.

“I had some phone calls placed to see if I could get that number [of people on board]. It didn’t come in a timely enough fashion,” Laps said.

But Laps says that the words “lock down” were not used. When asked to clarify his earlier comments, Geisner insisted that the words “lock down” had been used in reference to information.

“That info was not available,” Geisner said. “It was secured — not given out — locked down.”

Michael also set up the official State of Ohio election website reporting the 2004 presidential election returns. Connell was reportedly an experienced pilot. He died instantly on December 19, 2008, when his private plane crashed in a residential neighborhood near Akron, Ohio.

Connell’s track record makes it clear why he was viewed as a liability in some circles.

Ohio Republican Secretary of State J. Kenneth Blackwell hired Connell in 2004 to create a real-time computer data compilation for counting Ohio’s votes. Under Connell’s supervision, Ohio’s presidential vote count was transmitted to private, partisan computer servers owned by SmartTech housed in the basement of the Old Pioneer Bank building in Chattanooga, Tennessee.

Connell’s company, New Media Communications worked closely with SmartTech in building Republican and right-wing websites that were hosted on SmartTech servers. Among Connell’s clients were the Republican National Committee, Swift Boat Veterans for Truth and gwb43.com. The SmartTech servers at one point housed Karl Rove’s emails. Some of Rove’s email files have since mysteriously disappeared despite repeated court-sanctioned attempts to review them.

At 12:20 am on the night of the 2004 election exit polls and initial vote counts showed John Kerry the clear winner of Ohio’s presidential campaign. The Buckeye State’s 20 electoral votes would have given Kerry the presidency.

But from then until around 2am, the flow of information mysteriously ceased. After that, the vote count shifted dramatically to George W. Bush, ultimately giving him a second term. In the end there was a 6.7 percent diversion—in Bush’s favor—between highly professional, nationally funded exit polls and the final official vote count as tabulated by Blackwell and Connell. [source]

Watch this video, filled with pieces from Stephen Spoonamore, Cliff Arnebeck and others describing exactly what they believe happened here:

Michael Connell was deposed one day before the election this year by attorneys Cliff Arnebeck and Bob Fitrakis about his actions during the 2004 vote count, his access to Karl Rove’s email files, and how those files “went missing”.

Attorney Cliff Arnebeck

Bob Fitrakis, co-author of “Did George W. Bush Steal America’s 2004 Election?” at the Election Protection Forum; Ohio Green Party candidate

Velvet Revolution reported that Michael Connell was afraid that Bush and Rove would “throw him under the bus” if he were to expose what he knew or covered up.

Some pretty shocking news is coming out about the crash, much of it seems… eerily familiar.

“Under different political circumstances it might be possible to dismiss the Eveleth crash as a tragic accident whose causes, even if they cannot be precisely determined, lie in the sphere of aircraft engineering and weather phenomena. But the death of Paul Wellstone takes place under conditions in which far too many strange things are happening in America.

Wellstone’s death comes almost two years to the day after a similar plane crash killed another Democratic Senate hopeful locked in a tight election contest, Missouri Governor Mel Carnahan, on October 16, 2000. The American media duly noted the “eerie coincidence,” as though it was a statistical oddity, rather than suggesting a pattern.” [source]

An interesting anecdote about the Wellstone plane crash is that FBI agents arrived on the scene within an hour of first responders reaching the site. Given the remote location of the crash, this would have been impossible unless they had left for the crash site just as Wellstone’s plane was leaving the runway.

We apparently have a fascination in this country with engineering cover-ups around aircraft, courtesy of this “crime family masquerading as a political party”.

This came shortly before members of the White House were ordered to search employee computers for missing e-mails.

“A federal court ordered on Wednesday all employees working in the Bush White House to surrender media that might contain e-mails sent or received during a two and a half year period in hope of locating missing messages before President-elect Barack Obama takes over next week.”

Not only were emails missing, but 700 days’ worth of emails went missing. That’s not just a mistake; that is intentional. Almost TWO FULL YEARS of emails and backups just up and vanished?

If your primary job is as a system administrator or mail administrator, missing a tape or two happens, and forgetting to schedule a weekend backup happens… but forgetting to back up mail for two full years? No. That’s not a mistake, that’s a cover-up.

The Presidential Records Act of 1978 specifically holds the President responsible for this.

“Places the responsibility for the custody and management of incumbent Presidential records with the President.”

So when are we going to start holding these United States Citizens personally responsible for the crimes they’ve committed against the best interests of this country?

Querying the health of your domain and DNS


I run a lot of domains for clients, Open Source projects and my own pet projects… and keeping them all registered and updated, with proper zone files for forward and reverse DNS, can be complicated. I run my own DNS, and would never trust a third party to do it again. I used to use a third party to manage my DNS, but their web-based system was clunky and wasn’t as fast as I needed it to be.

But checking the quality of your DNS records is another matter entirely. For example, there’s a huge difference between writing HTML and writing valid HTML. This is why HTML validation exists.

Likewise, there is also a need for “DNS validation”. Enter DNS health and checking tools.

Previously, I used a free service called “DNS Report” from DNS Stuff, and it worked great… but they decided to go non-free, and now require a subscription to get to the same report data they used to provide gratis. It seems that whenever someone feels they can charge for something, they do.

I’ve been seeking out another alternative, something free and full-featured. There are quite a few, and some are shady… but here’s the list I’ve found, along with my personal review of their “quality”:

CheckDNS

http://www.checkdns.net/

This is a no-frills DNS checking service. It basically gives you a quick rundown of your domain through the root servers, your local nameservers, the version correlation (making sure the serial in your zone file matches), your webserver and your mail server.

Pros: It just works, plain-jane simple. I wish it had more detail like the ability to check reverse DNS, traceroute, check route status, rate the speed to resolve DNS queries and so on.

Cons: No suggestions for resolving anything marked as an issue or conflict. If you know DNS inside and out, the errors are obvious, but if you don’t… it can be cryptic. For example, my mailserver is greylisting all incoming connections, so it will return a 421 response instead of the expected 250 response. Their incoming probe looks like the following to my mail server:

Nov 30 15:53:15 neptune sm-mta[11904]: mAUKrEPP011904: Milter: from=, reject=421 4.3.2 graylisted - please try again later

intoDNS

http://www.intodns.com/

Pros: Simple and fast. The results it returns are very similar, almost identical, to the ones provided by DNSReport. Here is one example against one of my most heavily-hit domains, plkr.org.

Cons: No real details on how to fix the issues it reports. It may report that your SOA refresh is not correct, but lacks any recommendations on how to fix it (i.e. increase/decrease the timeout, etc.)

ZoneCheck

http://www.zonecheck.fr/

Pros: Fast, clean and tests a lot of various bits about your DNS: SOA, coherence, serial, illegal characters, ip-to-ns matching and so on. Very thorough.

Cons: While it is powerful, the resulting report isn’t exactly the most user-friendly, and the initial interface is… well, clunky as well.

Here’s a sample of the output from one of my domains:

     Testing: misused '@' characters in SOA contact name (IP=72.36.135.42)
     Testing: illegal characters in SOA contact name (IP=72.36.135.42)
     Testing: serial number of the form YYYYMMDDnn (IP=72.36.135.42)
     Testing: SOA 'expire' between 1W and 6W (IP=72.36.135.42)
     Testing: SOA 'minimum' between 3M and 1W (IP=72.36.135.42)
     Testing: SOA 'refresh' between 1H and 2D (IP=72.36.135.42)
     Testing: SOA 'retry' between 15M and 1D (IP=72.36.135.42)
     Testing: SOA 'retry' lower than 'refresh' (IP=72.36.135.42)
     Testing: SOA 'expire' at least 7 times 'refresh' (IP=72.36.135.42)
     Testing: SOA master is not an alias (IP=72.36.135.42)
     Testing: behaviour against AAAA query (IP=67.126.192.9)
     Testing: coherence between SOA and ANY records (IP=72.36.135.42)
     Testing: SOA record present (IP=67.126.192.9)
     Testing: SOA authoritative answer (IP=67.126.192.9)
     Testing: coherence of serial number with primary nameserver (IP=72.36.135.42)
     Testing: coherence of administrative contact with primary nameserver (IP=72.36.135.42)
     Testing: coherence of master with primary nameserver (IP=72.36.135.42)
     Testing: coherence of SOA with primary nameserver (IP=72.36.135.42)
     Testing: NS record present (IP=72.36.135.42)
     Testing: NS authoritative answer (IP=72.36.135.42)
     Testing: given primary nameserver is primary (IP=67.126.192.9)

And the results from that:

Test results
  ---- warning ----
   w: Reverse for the nameserver IP address doesn't match
     * ns.plkr.org./72.36.135.42
     * ns2.plkr.org./67.126.192.9

   w: [TEST delegated domain is not an open relay]: Mail error (Unexpected closing of connection)
     * generic

   w: [TEST can deliver email to 'postmaster']: Mail error (Unexpected closing of connection)
     * generic

   w: [TEST domain of the hostmaster email is not an open relay]: Mail error (Unexpected closing of connection)
     * generic

  ---- fatal ----

   f: [TEST can deliver email to hostmaster]: Mail error (Unexpected closing of connection)
     * generic

Final status

   FAILURE (and 5 warning(s))

Network Tools

http://network-tools.com/

Pros: You get what you get. Just information, in a raw, unstructured way.

Cons: Clunky, inconsistent GUI; the information is returned haphazardly, in a very unstructured and unintuitive way.

iptools

http://www.iptools.com/

Pros: Lots of tools to check the health of your domain, dns, dns records, IP, routing and so on.

Cons: Bad colors and an unstructured user experience.

The UI could use a bit of work and the blue and white is a bit painful on the eyes, but you get what you get. They’re basically using OSS and other tools under the hood to make this work (dig, in at least one case). This could leave them subject to some interesting exploits.

DNS Tools from Domain Tools

http://dns-tools.domaintools.com/

Pros: It is what it is, another plain-jane DNS query service. It allows you to ping, traceroute and report on the zone records for the domain you enter.

Cons: Too basic, not very useful above and beyond what I can do on my own from my own server.

This one, like some of the others, just wraps common OSS tools to query DNS records and presents the output in an unstructured, “raw” format. No attempt is made to offer suggestions or recommendations for any issues that are reported.

Free DNS Report

http://www.dnscolos.com/dnsreport.php

This looks suspiciously similar to DNS Report’s older UI. Some have suggested that this is a scam site, harvesting domains for parking or hijacking by poisoning the DNS of misconfigured domains. My domains are fine and secured, so I don’t mind testing them through this.

Pros: They actually do provide some basic recommendations to help resolve issues that are reported.

Cons: Not enough detail or depth on the DNS, zone, MX or domain itself. It is about 1/4 of what dnsreport was.

You Get Signal

http://www.yougetsignal.com/

Pros: Positive marks for the most-unique and humorous domain name. You can do ping, visual traceroute, reverse domain lookups, port-forwarding tester and so on. Not as full-featured as some of the others, but the information provided is somewhat structured in nature.

Cons: They made some good attempts at structure and visual appeal. They could use a bit more polish and more tools to round out the “suite” they provide, but it is what it is. The interface does “overlap” in places, tucking the output underneath other bits of the HTML and the maps, but you can select the text in your browser and paste it elsewhere to read it if you want.

Conclusions

While a lot of the tools make attempts to provide what you need to make sure your domains, MX, IP, routing and so on are correct, none of them really matches what dnsreport used to provide for free. If I had to choose one from the list above, intoDNS takes first place and CheckDNS a close second.

Ultimately, I may just write my own to do this, and make it spiffy. That’s the worst part about being in “first place” (as dnsreport was): it’s easy to see where you missed the market, and to open a gap for competitors to dive in and take it from you.
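If I do, it will probably start as something like the sketch below: use Net::DNS to ask each delegated nameserver for a zone whether it answers authoritatively, and compare the SOA serials across them. (The module is real; the default domain is just an example, and this is a starting point, not a finished tool.)

#!/usr/bin/perl
# Sketch: a tiny "DNS health" check -- ask each delegated nameserver for a
# zone whether it answers authoritatively, and compare the SOA serials.
# The default domain below is just an example.
use strict;
use warnings;
use Net::DNS;

my $domain = shift || 'plkr.org';
my $res    = Net::DNS::Resolver->new;

my $ns_answer   = $res->query($domain, 'NS') or die "No NS records for $domain\n";
my @nameservers = map { $_->nsdname } grep { $_->type eq 'NS' } $ns_answer->answer;

for my $ns (@nameservers) {
        my $r     = Net::DNS::Resolver->new(nameservers => [$ns], recurse => 0);
        my $reply = $r->send($domain, 'SOA');

        if (!$reply || !$reply->answer) {
                print "$ns: no SOA answer\n";
                next;
        }
        my ($soa) = grep { $_->type eq 'SOA' } $reply->answer;
        my $aa    = $reply->header->aa ? 'authoritative' : 'NOT authoritative';
        printf "%-25s serial %-12s %s\n", $ns, $soa->serial, $aa;
}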

I did something similar for my SEO keyword analysis tool. I was so frustrated with the inferior, broken alternatives out there… that I just wrote my own. Free, gratis, go play and have fun. It works for me and that’s why I wrote it.

Validating Blog Pingback Sites with Perl


Over the last few months I’ve been wondering about the slow response time when posting new entries to my blog. Granted, the hits to my blog have more than tripled in the last 2-3 months, but my servers can handle that load. The problem was clearly elsewhere.

After some more digging, I realized that the list of 116 ping sites in my blog’s “Update Services” list contains quite a few pingback sites that are no longer valid.

For those new to this, a “Pingback” is a specially-designed XML-RPC request sent from one website (A, usually a blog site) to another site (B) in response to a new entry being posted on site (A).

In order for these pings (not to be confused with ICMP pings) to work properly, there must be a real link, in the form of a URL, to validate. When (B) receives the notification from (A), it automatically goes back to (A) and checks for the existence of a live incoming link to (B). If that link exists, the Pingback is recorded successfully.

This “validation” process makes it much less prone to spam attacks than something like Trackbacks. If you’re interested in reading more about how spammers are using Pingbacks and Trackbacks to their advantage, I suggest reading Blog trackback Spam analysis on the “From Information to Intelligence” blog site.
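For context, the “Update Services” pings that WordPress sends out are even simpler than Pingbacks: a single weblogUpdates.ping XML-RPC call to each service, announcing that the blog has new content. Roughly like this — a sketch using XMLRPC::Lite (from the SOAP::Lite distribution), with a placeholder blog name and URL:

#!/usr/bin/perl
# Sketch: what one "Update Services" notification looks like -- a single
# weblogUpdates.ping XML-RPC call. The blog name and URL are placeholders.
use strict;
use warnings;
use XMLRPC::Lite;

my $result = XMLRPC::Lite
        ->proxy('http://rpc.pingomatic.com/')
        ->call('weblogUpdates.ping', 'My Blog', 'http://blog.example.com/')
        ->result;

die "No response from the ping service\n" unless $result;

print $result->{flerror}
        ? "Ping failed: $result->{message}\n"
        : "Ping accepted: $result->{message}\n";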

But I needed a way to test all of those ping sites and exclude the ones that are dead, down or throwing invalid HTTP responses… so I turned to Perl, of course!

My list of ping sites is a sorted, uniq’d plain-text list that has one ping site per-line. The list looks something like this:

http://api.moreover.com/ping
http://bitacoras.net/ping
http://blog.goo.ne.jp/XMLRPC
http://blogoole.com/ping
http://blogsearch.google.com/ping/RPC2
http://godesigngroup.com
http://godesigngroup.com/blog/feed
http://imblogs.net/ping
http://ping.bitacoras.com
http://ping.bloggers.jp/rpc
http://ping.blo.gs
http://pinger.blogflux.com/rpc
http://pinger.onejavastreet.com/
http://ping.myblog.jp
http://pingoat.com
http://pingomatic.com
http://rcs.datashed.net/RPC2
http://rpc.blogbuzzmachine.com/RPC2
http://rpc.blogrolling.com/pinger
http://rpc.pingomatic.com
http://rpc.weblogs.com/RPC2
http://rpc.wpkeys.com
...

I pass that list into my Perl script and, using one of my favorite modules (File::Slurp), read that file and process each line with the following script:

use strict;
use warnings;
use URI;
use File::Slurp;
use HTTP::Request;
use LWP::UserAgent;

# one ping URL per line; strip the trailing newlines before testing
chomp(my @ping_sites    = read_file("pings"));
my @valid_ping_sites    = ();

for my $untested (@ping_sites) {
        my $url         = URI->new($untested);

        my $ua          = LWP::UserAgent->new;
        $ua->agent('blog.gnu Ping Spider, v0.1 [rss]');
        $ua->timeout(10);

        my $req         = HTTP::Request->new(HEAD=>"$untested");
        my $resp        = $ua->request($req);
        my $status_line = $resp->status_line;

        (my $status)    = $status_line =~ m/(\d+)/;

        if ($status == '200') {
                push @valid_ping_sites, "$url\n";
        } else {
                print "[$status] for $url..\n";
        }
}

my $output      = write_file("pings.valid", @valid_ping_sites);

The output is written to a file called “pings.valid”, which contains all of the sites that return a valid 200 HTTP response. The rest are reported on STDOUT, resulting in the following output:

$ perl ./pings.pl 
[403] for http://1470.net/api/ping..
[403] for http://api.feedster.com/ping..
[404] for http://bblog.com/ping.php..
[500] for http://blogbot.dk/io/xml-rpc.php..
[403] for http://blogmatcher.com/u.php..
[500] for http://blogsnow.com/ping..
[404] for http://fgiasson.com/pings/ping.php..
[404] for http://pingoat.com/goat/RPC2..
[500] for http://ping.syndic8.com/xmlrpc.php..
[500] for http://ping.weblogalot.com/rpc.php..
[403] for http://popdex.com/addsite.php..
[404] for http://www.blogdigger.com/RPC2..
[500] for http://www.blogsnow.com/ping..
[500] for http://www.blogstreet.com/xrbin/xmlrpc.cgi..
[404] for http://www.catapings.com/ping.php..
[500] for http://www.focuslook.com/ping.php..
[500] for http://www.holycowdude.com/rpc/ping..
[403] for http://www.popdex.com/addsite.php..
[500] for http://xmlrpc.blogg.de..
...

Those failed entries are then excluded from my list, which I import back into WordPress under “Settings → Writing → Update Services”.

The complete, VALID list of ping sites as of the date of this blog post consists of the following 49 sites (meaning 58% of the list I started with was invalid):

http://1470.net/api/ping
http://api.feedster.com/ping
http://api.moreover.com/ping
http://bblog.com/ping.php
http://bitacoras.net/ping
http://blogbot.dk/io/xml-rpc.php
http://blog.goo.ne.jp/XMLRPC
http://blogmatcher.com/u.php
http://blogoole.com/ping
http://blogsearch.google.com/ping/RPC2
http://blogsnow.com/ping
http://fgiasson.com/pings/ping.php
http://godesigngroup.com
http://godesigngroup.com/blog/feed
http://imblogs.net/ping
http://ping.bitacoras.com
http://ping.bloggers.jp/rpc
http://ping.blo.gs
http://pinger.blogflux.com/rpc
http://pinger.onejavastreet.com/
http://ping.myblog.jp
http://pingoat.com
http://pingoat.com/goat/RPC2
http://pingomatic.com
http://ping.syndic8.com/xmlrpc.php
http://ping.weblogalot.com/rpc.php
http://popdex.com/addsite.php
http://rcs.datashed.net/RPC2
http://rpc.blogbuzzmachine.com/RPC2
http://rpc.blogrolling.com/pinger
http://rpc.pingomatic.com
http://rpc.weblogs.com/RPC2
http://rpc.wpkeys.com
http://www.a2b.cc/setloc/bp.a2b
http://www.blogdigger.com/RPC2
http://www.blogsdominicanos.com/ping/
http://www.blogsnow.com/ping
http://www.blogstreet.com/xrbin/xmlrpc.cgi
http://www.catapings.com/ping.php
http://www.feedsky.com/api/RPC2
http://www.focuslook.com/ping.php
http://www.godesigngroup.com
http://www.holycowdude.com/rpc/ping
http://www.imblogs.net/ping
http://www.pingmyblog.com/
http://www.popdex.com/addsite.php
http://www.wasalive.com/ping/
http://www.xianguo.com/xmlrpc/ping.php
http://xmlrpc.blogg.de

Feel free to use this list in your own blog or pingback list.

If you have sites that aren’t on this list, add them to the comments and I’ll keep this list updated with any new ones that arrive.

Locking more of the web down behind TLS and SSL


This is another case of yak shaving that all started with trying to implement imapproxy to proxy internal IMAP connections between Dovecot and SquirrelMail on my public servers.

Implementing imapproxy was a simple drop-in. All that was required was some server-side configuration to get Dovecot to listen to the server port that imapproxy uses, and then get imapproxy to listen on the public (external) port for clients to connect to.

In my /etc/dovecot/dovecot.conf, I set up the following:

protocols = imap imaps
protocol imap {
        listen = 127.0.0.1:14300
        ssl_listen = *:993
}
...
ssl_cert_file = /etc/ssl/certs/dovecot-ssl.crt
ssl_key_file = /etc/ssl/private/dovecot-ssl.key

In /etc/imapproxy.conf, I configured it as follows:

server_hostname 127.0.0.1
listen_port 143
listen_address 127.0.0.1
server_port 14300
...
tls_cert_file /etc/ssl/certs/dovecot-ssl.crt
tls_key_file /etc/ssl/private/dovecot-ssl.key

After restarting both, IMAP connections are now proxied and kept open for the duration of the session. Interacting with IMAP over that connection is visibly faster.

SquirrelMail needed a matching tweak to talk to 127.0.0.1 on port 14300. Under SquirrelMail’s config (Option 2 → A → 5 in the configure script), I changed the IMAP port to 14300. That gets SquirrelMail talking to imapproxy, speeding up webmail by several orders of magnitude.

But it was still in the clear. Unfortunately, there’s no easy way to just plug SquirrelMail into IMAP over SSL… so I had to use stunnel to do that:

/usr/bin/stunnel -P/var/run/ -c -d 1430 -r 127.0.0.1:993

I then went back into SquirrelMail’s config and changed the port from 14300 to 1430. Now SquirrelMail talks to the local imapproxy → Dovecot chain over SSL instead of plain text.

But now my Dovecot certs needed to be regenerated because they were close to expiring, and with the recent Debian PRNG problem, it was better to just re-gen all certs and keys anyway.

To do that, I had to do the following:

$ openssl genrsa -out dovecot-ssl.key 4096

$ openssl req -new -key dovecot-ssl.key -out dovecot-ssl.csr

I pasted the contents of that final CSR (dovecot-ssl.csr above) into the form at CACert and had them generate a new server certificate for my mail host: mail.gnu-designs.com, where my Dovecot instance resides. With that, I put those keys in their proper location and configured /etc/dovecot/dovecot.conf accordingly:

ssl_cert_file = /etc/ssl/certs/dovecot-ssl.crt
ssl_key_file = /etc/ssl/private/dovecot-ssl.key

Restarted Dovecot and now I’m properly secured with stronger, less vulnerable keys and certs.

But what about locking down SquirrelMail behind SSL as well?

To do that, I had to update my DNS to point mail.gnu-designs.com to a separate physical IP on the same machine. With Apache (at least without SNI, which wasn’t a practical option at the time), you can’t have more than one SSL VirtualHost behind the same IP address; each new SSL host you want to deploy has to be on its own IP.

So I had to change my DNS to point mail.gnu-designs.com from its present IP to a new IP on the same host. Now comes the tricky part… configuring Apache.

Since I run Debian, the Apache configuration is a bit… non-standard. In /etc/apache2/ports.conf, I had to add Listen directives for that new IP, on both port 443 and port 80.

Listen 72.36.135.43:443
Listen 72.36.135.43:80

And a VirtualHost stanza for that new SSL vhost had to be created, along the lines of the sketch below.
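(This is a sketch; the DocumentRoot and certificate paths are placeholders for wherever your webmail root and your CACert-signed certificate and key actually live.)

<VirtualHost 72.36.135.43:443>
        ServerName mail.gnu-designs.com
        DocumentRoot /var/www/squirrelmail

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/mail.gnu-designs.com.crt
        SSLCertificateKeyFile /etc/ssl/private/mail.gnu-designs.com.key
</VirtualHost>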

Now my regular non-SSL stanza can be changed to look like this:

<VirtualHost *:80>
        ServerName mail.gnu-designs.com
        Redirect permanent / https://mail.gnu-designs.com/
</VirtualHost>

This will redirect non-SSL clients to the SSL version of the site, so their session is secured behind SSL on port 443. One last poke to make it possible to use the SSL VirtualHosts without having to import the upstream Root CA Certificate:

SSLCACertificateFile ssl.certificates/cacert-root.crt

From the Apache 2.x documentation:

This directive sets the all-in-one file where you can assemble the Certificates of Certification Authorities (CA) whose clients you deal with. These are used for Client Authentication. Such a file is simply the concatenation of the various PEM-encoded Certificate files, in order of preference.

I duplicated the same process for my other SSL vhost, spam.gnu-designs.com, which hosts the DSPAM web interface.

If you’re not using dspam to reduce your spam by 99.9%, you should be. It runs circles around every OSS and commercial product I’ve tried, and I’ve been running it for years (see my previous posts on dspam for more background and hard data).

Conclusion:

I did a few things here:

  1. Set up an IMAP proxy in front of Dovecot (my IMAP server), which dramatically increased its responsiveness
  2. Configured that proxy to speak SSL (imaps on port 993) as well as plain IMAP (port 143)
  3. Configured SquirrelMail to talk to the IMAP proxy over SSL only, using stunnel
  4. Locked down two of my public-facing Apache vhosts (webmail and DSPAM) with SSL
  5. Regenerated all SSL certificates with stronger keys, signed through CACert
  6. Imported the CACert root certificate and made it global within all of my Apache SSL vhosts

Now everything is a bit more secure than it was before… for now.

Last note: As I was writing this post, I realized that WordPress was eating some characters in my <code> … </code> blocks. I looked around for a plugin to try to alleviate that, and found several, none of which worked properly.

I tried Code Auto-Escape which at first glance looked promising, but all it did was encode my code blocks into a single-line base64 string, and output that. Blech.

Then I tried one called Code Markup which had a very detailed explanation and several ways to use it. It too failed miserably on the most basic of code blocks (the Apache stanzas above).

It referenced several other markup and syntax highlighting plugins (GeSHi, highlighting with Enscript, etc.); none of those worked as advertised either.

What I finally found that DID work was a Java-based tool called Code Format Helper for WordPress. Basically, you paste your code block into the small Java applet, and it converts all of the special characters to encoded HTML entities. You then paste the result into your WordPress post and submit it. You can see in the post above that it works perfectly.

Voila!
