AKA, What good is a checksum, anyhow?

A lot of download sites present checksums for you to check that what they host is actually what you download. I, for one, have always been dubious of such measures, and the recent Linux Mint breach proves what I’ve always suspected.
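For the record, the check itself is trivial; on Linux it boils down to something like the following (the ISO name here is just an example), and it only means anything if the published checksum itself can be trusted:

sha256sum linuxmint-17.3-cinnamon-64bit.iso
# compare the output against the value published on the download page, or,
# if the site publishes a sha256sum.txt file alongside the ISO:
sha256sum -c sha256sum.txt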
Continue reading “Linux Mint Breach Lessons”

[Re-blogged from The Guardian]

The law requires a balance between flexibility and tyranny, and was never intended to allow the government to dictate software design

All Writs Act: Congress wanted to give the government enough power to govern effectively, but also to set up limits so that the new government didn’t become a tyranny. Photograph: Nicholas Kamm/AFP/Getty Images

Apple’s celebrated fight with the FBI over the security of its encrypted iPhones has shone the spotlight on an old and obscure federal law from 1789 known as the All Writs Act (AWA).

The AWA is a short little statute, giving federal courts the power to “issue all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law.”

The FBI argues that the AWA empowers a court to order Apple to create custom software to circumvent the security on an iPhone possessed by one of the San Bernardino shooting suspects.

Passed by the First Congress in 1789, this little law is a piece of Swiss Army knife legislation that the FBI is trying to turn into a giant sword, out of all proportion to what it is supposed to do. But if we want to make sense of the current security and privacy controversy pitting the FBI against the tech giant, it helps to understand what the AWA is and what its limits are.

Read more at The Guardian.

The arguments for encryption backdoors are so ludicrous that the people making them have to be either morons or liars.

So why do they continue to argue for these backdoor mechanisms, now more loudly than ever?

The answer appears to be that they’re lying to us.

This is a reblog of an article at Lauren Weinstein’s Blog

Despite a lack of firm evidence to suggest that the terrorist attackers in Paris, in San Bernardino, or at the Planned Parenthood center in Colorado used strong (or perhaps any) encryption to plan their killing sprees, government authorities around the planet — true to the long-standing predictions of myself and others that terrorist attacks would be exploited in this manner — are once again attempting to leverage these horrific events into arguments for requiring “backdoor” government access to the encryption systems that increasingly protect ordinary people everywhere.

This comes despite the virtual unanimity among reputable computer scientists and other encryption experts that building such “master keys” into the encryption systems that protect our financial transactions and ever more aspects of our personal lives would fundamentally weaken them, exposing us all to exploits both through mistakes and through purposeful abuse, potentially by governments and by outside attacks on our data.

It’s difficult — one might say laughable — to take many of these government arguments seriously even in the first place, given the gross incompetence demonstrated by the U.S. government in breaches that exposed millions of citizens’ personal information and vast quantities of NSA secrets — and with similar events occurring around the world at the hands of other governments.

But there are smart people in government too, who fully understand the technical realities of modern strong encryption systems and how backdoors would catastrophically weaken them.

So why do they continue to argue for these backdoor mechanisms, now more loudly than ever?

The answer appears to be that they’re lying to us.

Or if lying seems like too strong a word, we could alternatively say they’re being “incredibly disingenuous” in their arguments.

You don’t need to be a computer scientist to follow the logic of how we reach this unfortunate and frankly disheartening determination regarding governments’ invocation of terrorism as an excuse for demanding crypto backdoors for authorities’ use.

We start with a fundamental fact….

Read more at Lauren Weinstein’s Blog

This is a suggested template for a malware removal guidelines document.

This document consists of a set of suggested guidelines and steps to aid in the successful removal of malware. No set of steps can 100% guarantee the state of any machine as far as malware infection goes, but following this guide will hopefully provide a framework to make malware removal more complete, more successful and less likely to end with wiping the user’s system and starting over.

Determining how far to go means balancing the inconvenience to the end user of backing up and restoring the documents and settings of the various applications they use on a daily basis against the need for security and the time allocated to cleaning the malware off of the machine.

Continue reading “Malware Removal Guidelines”

Don’t ring the ‘dorbell’; no one’s home.


I was looking through my logwatch log one day, and I came across some of the strangest looking hits I’ve ever seen.

/Ringing.at.your.dorbell!: 1 Time(s)

Looking at the original other_vhosts_access.log file, I saw:

www.johndstech.com:80 125.25.26.121 - - [12/Jul/2015:10:30:31 -0600] "GET /Ringing.at.your.dorbell! HTTP/1.0" 404 31386 "http://google.com/search?q=2+guys+1+horse" "x00_-gawa.sa.pilipinas.2015"

My first thought was that it was some sort of strange joke, but it occurred again the next day, and so it became obvious it was something worth looking into. As it turns out, this is an attempt at exploiting the shellshock vulnerability. Script kiddies aren’t too bright, so they just copy and paste old vulnerabilities and try over and over again. So, how best to block stupid URLs like this?

I could have elected to block the traffic by referrer, but weighing the pros and cons of this came down on the con side for me. After all, the referrer isn’t necessarily what I want to block, and the skepticism.us link already points out two of them. No, I want to block stupid URLs, not the referrer.

Well, a little research came up with ModSecurity, aka mod_security. It seemed like the ideal choice, and it uses Apache syntax and config files. So, I proceeded to implement it and hit a wall — hard. I banged my head here, and I banged my head there, but all I got was a headache.

That’s because I was wasting my time, at least a couple of hours of it. ModSecurity turns out not to be very WordPress friendly, which makes little sense given that the examples almost always consist of using a PHP test script to block a MySQL injection! The suggested workaround is to exclude the WordPress directories, which makes absolutely no sense when WordPress is the main platform!

So, I was without a means to block stupid URLs. Fail2ban blocks IP addresses, but only after failed access attempts, particularly bad logins. However, it turns out that there still are a couple of rather reactive alternatives.

Method 1: Iptables

The best seems to be to use iptables. Granted, that is a little intimidating, but fortunately there are plenty of examples on the web.

The best one I’ve seen so far is “Linux : using iptables string-matching filter to block vulnerability scanners” at SpamCle@ner. It is easy to follow, although I’ll admit I only followed the first part of it. The downsides they point out are that it will always filter port 80 (a minor one for me, since I use ufw and other protections anyhow) and that “it can cause errors (“false positive”)”, which seems very unlikely for the type of junk we are talking about here.
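To give a flavor of it, the rules boil down to something like this (the URL fragments are just the junk from my own logs; adjust to taste):

# Drop any inbound packet to port 80 whose payload contains the junk URL fragment.
iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string "Ringing.at.your.dorbell" -j DROP
iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string "/Diagnostics.asp" -j DROP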

On the upside, this blocks the traffic before it even hits the Apache server. On the downside, you have to find a way to save it, else the rules disappear upon a server reboot.

There is a workaround for this, and it involves installing iptables-persistent. Read up on this and how to save the iptables at “Saving Iptables Firewall Rules Permanently“.
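On Debian/Ubuntu, the gist of it is roughly this (a sketch, not a full walkthrough):

sudo apt-get install iptables-persistent
# After adding or changing rules, dump the current set where it will be reloaded at boot:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'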

Method 2: Ban repetitive stuff from logs

If you want a lower-quality, after-the-fact sort of ban, you could also implement a script that does some of the repetitive blocking for you. For example:

#!/bin/bash
# Ban the IP addresses behind known-bad URL requests and WordPress login abuse.

if [ "$(id -u)" -ne 0 ]
then
   echo "Must run as root!" >&2
   exit 1
fi

LOGFILE="/some/path/to/idiot.log"

# Ban every unique IP that requested the URL fragment in $1.
# In these Apache logs, field 2 of each line is the client IP.
banhttp()
{
   for II in $(grep "$1" /var/log/apache2/*.log | cut -d' ' -f2 | sort -u)
   do
      ufw insert 5 deny from "$II" >> "$LOGFILE"
      echo "$II" >> "$LOGFILE"
   done
}

# Ban every unique IP tied to the login name in $1.
# In these auth.log entries, field 11 is the client IP.
banwp()
{
   for II in $(grep "$1" /var/log/auth.log | cut -d' ' -f11 | sort -u)
   do
      ufw insert 5 deny from "$II" >> "$LOGFILE"
      echo "$II" >> "$LOGFILE"
   done
}

for JJ in 'POST /xmlrpc.php' '/Diagnostics.asp' '/Ringing.at.your.dorbell!' \
'/wiki/Five_Weird_Tricks_for_Stair' '/wiki/What_You_Need_to_Understand_About_Cardio_Training'
do
   banhttp "$JJ"
done

for JJ in 'PaigeBiehl3540' 'Sabina27Y002' 'FrancineSmith23'
do
   banwp "$JJ"
done

In theory, this latter method might be easier to maintain for newbies, but over time I suspect that this could become unwieldy. OTOH, it requires no direct fiddling with iptables, saving them, etc.

In reality, the best approach might be a hybrid one. The more obvious offenders could go into iptables directly, especially when they are so common that you are blocking the same stupid stuff every day and/or they are obvious hacking attempts. The less serious stupid stuff could just go into the script, which will block IPs that keep banging on links that don’t exist, never have existed, and are just plain tiring to look at.


TLDR: Stuff happens.


Various news outlets are abuzz about the United Airlines and New York Stock Exchange technical glitches today. They happened within hours of each other, so it is only natural that someone somewhere would wonder whether or not they were hacked. Continue reading “United and NY Stock Exchange Outages Due to Hackers?”

After some initial excitement that TrueCrypt would carry on in some form via CipherShed, I’ve come to the conclusion that it is dead, Jim.

CipherShed logo

For some time, I’ve been using encryption on Linux just in case my laptop gets stolen. However, towers and desktops can be stolen as well. It’s just not as easy to do so, but it is far from impossible. So, I’ve been wanting to encrypt the data there as well, which is really the same data backed up anyhow.
Continue reading “Giving Up on CipherShed”


I keep waiting for the day when Adobe Flash is a thing of the past. Unfortunately, there are a lot, and I mean a lot, of older sites out there that use it. So, if you find yourself having to use it, at least add a layer of protection where it will prompt you as to whether or not it will run.

Old Goat Guide recently posted “Using Adobe Flash Player Responsibly” that gives instructions on how to set Internet Explorer so that you have to click to run Flash. Thankfully, Firefox now has this setting as the default, although it is called “Ask to activate”. If you are running Chrome, I suggest using the built-in PepperFlash (also available for Chromium, usually as a separate download), as I have found it more stable and less of a target for hackers.

Tired of getting probed? Here is one way to automatically add probing sites to ufw.


It sometimes seems that there isn’t a range of IP addresses that isn’t filled with idiots who have no life. They are sleaze who won’t go out and earn an honest living. Running a website requires vigilance, and I’ve learned the hard way that you cannot outsource this to some company that throws up some hardware but won’t lift a finger to help you resolve real issues. However, being vigilant shouldn’t mean that you don’t have any more of a life than the idiots who are out causing problems.

Logwatch is a very useful utility for summarizing, analyzing and reporting issues found in various logs on the system. It simplifies everything because you would otherwise be sifting through dozens, literally, of log files on the system looking for problems.

One of the useful features is that it looks for website probing. It doesn’t seem to catch everything, but it catches enough that if it reports on it, you should act on it and not delay. You could, of course, manually block the IP addresses it reports as a probe, and I did that for some time, but it is a continuous process.  Continuous, monotonous tasks are exactly the sort of thing computers were made for, so why not automate as much as is reasonable and leave only the more difficult things in the log for human eyes?  After all, if it is reporting on it, it is egregious enough of an activity to block the IP either individually or within a given range.
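Individually, that amounts to a one-liner per offending address (the IP here is just a placeholder):

ufw insert 1 deny from 203.0.113.45   # place it near the top so it matches before any allow rules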

So, I wrote a script that parses the input and emails the resulting file. Instead of calling sendmail directly, you tell logwatch to “email” the output through this script, which I call logwatchproc.bash, and it takes care of the rest.

I should mention that if you follow DigitalOcean’s instructions in the Logwatch link above, make a note of a couple of things:

It is bad form to modify distributed config files; they have a tendency to get overwritten by package upgrades. Furthermore, it turns out that editing the distributed copy won’t even have the expected behavior here. Be sure to:

  1. mkdir /var/cache/logwatch
    cp /usr/share/logwatch/default.conf/logwatch.conf /etc/logwatch/conf/

    Then, you can edit the file in /etc/logwatch/conf comfortably.

  2. Change the line:
    mailer = "/usr/sbin/sendmail -t"

    to

    mailer = "/usr/bin/logwatchproc"

Next, you will want to create the file. I recommend putting it in the home directory of an account used for maintenance (which means not in root’s home), and then linking the file into /usr/bin.

Use your favorite Linux editor (not a DOS/Windows one, unless you want line-ending problems) and paste this into it:

#!/bin/bash
# Wrapper that logwatch calls in place of sendmail: it mails the report,
# then blocks every IP listed in the "probed" section via ufw.

# Read from the file given as an argument, or from stdin otherwise.
[ $# -ge 1 ] && [ -f "$1" ] && input="$1" || input="-"
MYBASE="/home/NameOfUser" # Preferably, whatever user you use for maintenance
LOGMAIL="${MYBASE}/logwatchmail.tmp"
LOGLOG="${MYBASE}/logwatchproc.log"
PROBEFILE="${MYBASE}/probesites.txt"
TODAY=$(date)
echo "=========" >> "${LOGLOG}"
echo "${TODAY}" >> "${LOGLOG}"
# Save the report first
cat "$input" > "${LOGMAIL}"
# Email it before something happens
sendmail -t < "${LOGMAIL}"
sleep 30
# The "probed" summary line carries the number of probing sites in field 5.
NUMSITES="$(grep probed "${LOGMAIL}" | cut -d' ' -f5)"
echo "NUMSITES = ${NUMSITES}" | tee -a "${LOGLOG}"
if [ "${NUMSITES}." = "." ]
then
	NUMSITES=0
fi
if [ "${NUMSITES}" -gt 0 ]
then
	# The NUMSITES lines that follow the summary are the offending IPs.
	grep probed -A "${NUMSITES}" "${LOGMAIL}" | tail -"${NUMSITES}" > "${PROBEFILE}"

	for II in $(cat "${PROBEFILE}")
	do
		echo "$II" >> "${LOGLOG}"
		ufw insert 3 deny from "$II" >> "${LOGLOG}"
	done
else
	echo "No further actions needed." >> "${LOGLOG}"
fi

Be sure to change “NameOfUser” to the maintenance account login name, and save the script in a convenient location in that account’s home directory, e.g., /home/NameOfUser/bin, for testing. Notice as well that I use “ufw insert 3” to keep the new rules near the top (so they don’t get preempted by later ALLOW rules). If you have any allow rules near the top that this would displace, be sure to adjust the number as necessary.
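If you are not sure where your existing rules sit, you can check the current order first:

ufw status numbered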

Next, make a symbolic link to it:

ln -s /home/NameOfUser/bin/logwatchproc.bash /usr/bin/logwatchproc

You can test it manually by calling /etc/cron.daily/00logwatch as root. Initially, you might want to test using sudo, but it is better to do an “su -” and switch to root for the final testing, since environment variables can significantly affect how bash scripts behave.
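In short, the final test boils down to something like:

su -
/etc/cron.daily/00logwatch
tail /home/NameOfUser/logwatchproc.log   # confirm the script logged what you expect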

That’s it!