Review: Penetration Testing with BackBox

posted in review, security on 25 Apr 2014,

Full disclosure: I was asked by Packt Publishing to provide a review of Penetration Testing with BackBox by Stefan Umit Uygur. They offered me a free copy of the ebook; otherwise I have not been compensated in any way for this review.

The book aims to be an introduction to penetration testing for experienced Unix/Linux users or administrators (apparently there are Linux users who aren’t administrators nowadays). After reading the book I believe that the assumed use-case is an administrator who wants to gain some insight into the tools that might be used against his server. Other parts of the book (hash cracking, tooling) might attract aspiring script kiddies.

View details

Using a (host) reverse-proxy together with LXC application servers

posted in linux-stuff, virtualization on 07 Apr 2014,

The basic idea is to move application servers into LXC containers while keeping the HTTP server part (which is also responsible for hosting static files) on the host system.

Normally an incoming request would be handled by an HTTP server on the host as well as by an HTTP server on the virtualized guest:

  browser -> http server (host) -> http server (guest) -> app server (guest)

I’m configuring the host HTTP server to directly communicate with the app worker, thus:

  browser -> http server (host) -> app server (guest)

This removes one layer of indirection and simplifies HTTP server configuration (think maximum upload sizes, which would otherwise have to be adapted for each web server). It is possible because LXC containers live within the host filesystem (i.e. /var/lib/lxc/<container name>/rootfs): the host web server can directly access static files without even involving the guest container in the first place.
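
On the host this could look roughly like the following nginx vhost, written here via a heredoc (a sketch assuming nginx on the host; container name, IP, port and paths are placeholders, and the post itself does not prescribe a specific web server):

cat <<'EOF' | sudo tee /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name myapp.example.com;

    # static files are read straight out of the container's filesystem
    root /var/lib/lxc/myapp/rootfs/srv/myapp/public;
    try_files $uri @app;

    location @app {
        # the app server listening inside the container (default LXC bridge network)
        proxy_pass http://10.0.3.10:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
sudo service nginx reload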

View details

How to convert a KVM image into an LXC container

posted in linux-stuff, virtualization on 22 Mar 2014,

KVM was an improvement over Xen for me. Still, for many use-cases LXC is a more performant, light-weight alternative – which also seems to be en vogue nowadays.

By switching to LXC I’ve reduced my overall memory usage a bit – the main benefit is that processes within an LXC container are just separated processes within the host system. This should allow the host system to manage memory (think caches, buffers, swap, etc.) more efficiently.

I’ve started converting most of my trusted KVM images into LXC containers; this post contains the necessary steps.
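
The full walk-through is in the post; the core of the conversion is roughly the following (image path and container name are placeholders, assuming a qcow2 image with the root filesystem on the first partition):

# expose the KVM disk image as a block device
sudo modprobe nbd max_part=16
sudo qemu-nbd -c /dev/nbd0 /var/lib/libvirt/images/guest.qcow2

# mount the guest's root filesystem and copy it into the container directory
sudo mkdir -p /mnt/kvm-root /var/lib/lxc/guest/rootfs
sudo mount /dev/nbd0p1 /mnt/kvm-root
sudo rsync -aAXH /mnt/kvm-root/ /var/lib/lxc/guest/rootfs/

# clean up; a matching LXC config for "guest" still has to be created afterwards
sudo umount /mnt/kvm-root
sudo qemu-nbd -d /dev/nbd0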

View details

How to use virt-install to install new virtual machines within libvirt/kvm

posted in linux-stuff, virtualization on 20 Mar 2014,

I’ve been using KVM and virt-install to manage virtual machines on one of my servers; this post shows how to use virt-install.
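
A typical invocation looks something like this (all names, sizes and the installer URL are just example values):

sudo virt-install \
  --name testvm \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=10 \
  --network bridge=br0 \
  --graphics none \
  --console pty,target_type=serial \
  --location http://ftp.debian.org/debian/dists/stable/main/installer-amd64/ \
  --extra-args 'console=ttyS0'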

View details

Rogue Access Point and SSL Man-in-the-Middle the easy way

posted in linux-stuff, pen-test, security on 24 Feb 2014,

After trying to set up a rogue access point using squid and hostapd I noticed that KDE’s network-manager offers access-point functionality out of the box. How easy is it to combine this with BURP for an SSL man-in-the-middle attack? Well, some GUI clicking and three command line invocations..
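
The command-line part might look roughly like this (the interface name and BURP’s listening port are assumptions; BURP itself has to be switched to invisible proxying in the GUI):

# enable forwarding between the access-point interface and the uplink
sudo sysctl -w net.ipv4.ip_forward=1

# redirect HTTP and HTTPS traffic arriving on the AP interface into BURP (port 8080)
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j REDIRECT --to-port 8080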

View details

How to set up a rogue access point with a transparent HTTP(S) proxy

posted in linux-stuff, pen-test, security on 26 Jan 2014,

I’m always reading about dangerous rogue access points but have never actually seen one in action. So what better than to create a test setup..

Hardware for this test setup will be:

  • my old Linux notebook (a MacBook Pro) as the fake access point
  • a small Deal Extreme network card (Ralink 5070 chipset)

I’ve actually bought three different wireless cards for under $20 and am trying out the different chipsets. This card is rather small (like a USB stick), so it isn’t too conspicuous.

The basic idea is to use hostapd to create a virtual access point. Were I a hypothetical attacker I’d call it ‘starbucks’, ‘freewave’ or name it after some coffee shop around the corner. I’m using the notebook’s built-in wireless card to provide the internet uplink. To proxy HTTPS transparently I will have to compile a custom version of squid (including SSL support). I’m using Ubuntu 13.10 for this; other Linux distributions would work the same.
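
Rebuilding the Ubuntu squid package with SSL support boils down to roughly the following (a sketch for Ubuntu 13.10, assuming deb-src entries are enabled; the post has the details):

# fetch build dependencies and the package source
sudo apt-get build-dep squid3
apt-get source squid3
cd squid3-*

# add --enable-ssl --enable-ssl-crtd to the configure flags in debian/rules,
# then rebuild and install the resulting package
dpkg-buildpackage -us -uc -b
sudo dpkg -i ../squid3_*.deb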

View details

How to use FakeS3 for S3 testing

posted in linux-stuff, ruby-on-rails on 20 Oct 2013,

I’m contributing to a secure cloud project (well, it’s not that secure yet, but getting there..). Its backend storage options include S3, so I want to test the S3 functionality against a locally installed S3 server.

I first tried to utilize OpenStack Object Storage (Swift) or Riak, but both solutions were rather heavy-weight and cumbersome to set up. Bear in mind that I just wanted some fake S3 storage server deployed within a local network (without any internet connection), so security, authentication and performance were mostly moot.

Then I came upon FakeS3. This is a simple Ruby gem which emulates an S3 server. Coming from an RoR world this seemed to be a perfect fit for me.
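
Getting a throw-away S3 endpoint up is essentially a two-liner (root directory and port are arbitrary; check the gem’s README for the exact options of your version):

gem install fakes3
fakes3 -r /tmp/fakes3_root -p 4567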

View details

Linux: How to force an application to use a given VPN tunnel

posted in linux-stuff, protect-your-data, security on 10 Oct 2013,

Somehow I have to use VPN services throughout the day:

  • when pen-testing from abroad I really need to log in to my company’s network first. Otherwise my provider gets kinda grumpy when I’m doing fast non-cloaked scans against large companies.
  • also when pen-testing I like to use some cloaking VPNs to test the client’s detection capabilities
  • if I ever were to use BitTorrent I’d really like to make sure that the torrent program can only communicate through a private VPN provider (such as PIA).

The easy solution would be to connect the OpenVPN tunnels on startup and just route all traffic through them. Alas, this is way too slow for daily use – and somewhat error-prone: if a tunnel dies while a pen-test is in progress, traffic might escape into ‘unsecured’ public networks. The same would be true for torrents.
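
One way to pin a single program to a tunnel (a sketch of a common approach, not necessarily the exact setup from the post) is to run it as a dedicated user and combine iptables owner matching with policy routing:

# mark all packets generated by the dedicated user "torrent"
sudo iptables -t mangle -A OUTPUT -m owner --uid-owner torrent -j MARK --set-mark 0x1

# route marked packets through the VPN interface via a separate routing table
sudo ip rule add fwmark 0x1 table 100
sudo ip route add default dev tun0 table 100

# and make sure nothing leaks if the tunnel is down
sudo iptables -A OUTPUT -m owner --uid-owner torrent ! -o tun0 ! -o lo -j REJECT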

View details

Git with transparent encryption

posted in linux-stuff, protect-your-data, security on 04 Jul 2013,

This is part three of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. “executive summary” of my findings. Stay tuned.

git was originally written by Linus Torvalds as the SCM tool for the Linux kernel. Its decentralized approach fits well with online OSS projects, and it slowly became the SCM of choice for many OSS projects. Various dedicated hosting services such as GitHub or Bitbucket arose. In this post I’ll look into using git as a replacement for Dropbox for data sharing. As Dropbox has a devastating security history (link needed) I’ll look into ways of transparently encrypting remote git repositories.
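
One candidate mechanism (whether or not the post settles on it) is git’s clean/smudge filters combined with symmetric openssl encryption, roughly:

# mark all files to be run through an "encrypt" filter
echo '* filter=encrypt' > .gitattributes

# encrypt on staging (clean), decrypt on checkout (smudge); the key file path is a placeholder
# (-nosalt keeps the ciphertext deterministic so unchanged files don't show up as modified)
git config filter.encrypt.clean 'openssl enc -aes-256-cbc -nosalt -pass file:/home/user/.git-enc-key'
git config filter.encrypt.smudge 'openssl enc -d -aes-256-cbc -nosalt -pass file:/home/user/.git-enc-key'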

View details

Encrypted S3 storage filesystems

posted in linux-stuff, protect-your-data, security on 27 Jun 2013,

This is part two of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. “executive summary” of my findings. Stay tuned.

This post tries out some filesystems that directly access S3. I’ll focus on Amazon’s S3 offering, but there are many alternatives, e.g. OpenStack. Amazon S3 has the advantage of unlimited storage (even if infinite storage would come with infinite costs..). S3 itself has become a de-facto standard for object-based file storage.
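
The kind of setup under test looks something like this, e.g. s3fs for the S3 mount with an encfs layer on top so that only ciphertext ever reaches S3 (bucket name and paths are placeholders):

# mount the S3 bucket (credentials in ~/.passwd-s3fs)
s3fs my-backup-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs

# encrypted view: /mnt/s3/encrypted holds ciphertext, /mnt/secure the cleartext
encfs /mnt/s3/encrypted /mnt/secure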

View details

Secure Online Data Backup using Duplicity

posted in linux-stuff, protect-your-data, security on 23 Jun 2013,

This is part one of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. “executive summary” of my findings. Stay tuned.

Duplicity is a command-line tool similar to rsync: you give it two locations and it synchronizes the first location to the second. Duplicity adds additional features on top of rsync; especially interesting for me are incremental encrypted backups to remote locations. This form of storage prevents any hoster from gaining information about my stored data or its metadata (like filenames, etc.).

Duplicity supports multiple storage backends; the most interesting ones for me are Amazon S3 and SSH/SFTP. All my examples will use the SFTP backend as I tend to have SSH servers lying around.
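
A minimal encrypted backup/restore cycle with the SFTP backend looks like this (host and paths are example values; duplicity prompts for a GPG passphrase unless the PASSPHRASE environment variable is set):

# incremental, GPG-encrypted backup of the home directory to a remote SSH server
duplicity /home/user sftp://backupuser@backup.example.com//srv/backups/home

# restoring into a scratch directory
duplicity restore sftp://backupuser@backup.example.com//srv/backups/home /tmp/restored-home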

View details

Penetration testing

posted in security on 16 Feb 2013,

I am an RoR developer gone pen-tester for the last couple of months. Clients range from smallish web portals to large multi-national financial institutions. So far I have a success rate well above 85%.

This post reflects upon my modus operandi. It contains a high-level view of how I work: while specific techniques change, the overall frame of mind stays the same, so I consider the latter more important than the former. I also hope for feedback regarding techniques and tools.

View details

Avoiding Internet/Network Surveillance

posted in linux-stuff, protect-your-data, security on 09 Dec 2012,

Last week’s World Conference on International Telecommunications (WCIT) brought internet surveillance into the public news: one outcome of the conference was the standardization of DPI (deep packet inspection) technology. This infrastructure standard will make it easier for governments to implement large-scale surveillance and/or filtering. The funny thing is that governments already have those capabilities; they only want to standardize them. The public outrage came too late.

So let’s protect you from governments at home or abroad, the RIAA, the MPAA, random eavesdroppers and anyone else who wants to listen in on your secrets while you’re surfing the Internet. The initial steps are easy and cheap (or even free), so there’s no reason to let your guard down.

View details

Linux: How to encrypt your data on hard drives, USB sticks, etc.

posted in linux-stuff, protect-your-data, security on 02 Dec 2012,

Imagine your laptop (or desktop computer) being stolen. How long would it take and how much would it cost you to get back on track? Hardware is easy: a new premium desktop costs around $1000, a new laptop around $2000. Your data “should” always be backed up somewhere anyways.

But this neglects a hidden cost: some thief has all your data, including all your online identities, photos, source code for software projects and private notes/pictures that you do not want published. How much do you value your online reputation? Would you change all your online account passwords and connected applications after a theft? How much time and effort would this cost you – and could you do it fast enough before the attacker uses that data against you?

I’m employing transparent encryption to mitigate this scenario. As long as sensitive data only ever hits my hard drives/SSDs encrypted, nothing can be extracted by a thief. This is done in a very lazy fashion: no additional password entry is needed for internal hard drives (i.e. /home), and one password is used per external drive.
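
For an external drive this boils down to a standard dm-crypt/LUKS setup, roughly as follows (the device name is an example, and luksFormat destroys all existing data on the partition):

# create the LUKS container and open it under /dev/mapper/external
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup luksOpen /dev/sdb1 external

# create a filesystem inside and mount it
sudo mkfs.ext4 /dev/mapper/external
sudo mount /dev/mapper/external /mnt/external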

View details

Linux: How to forward port 3000 to port 80

posted in linux-stuff, ruby on rails on 18 Nov 2012,

Another small tip: to locally forward port 80 to port 3000 use the following Linux iptables command:

$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

You can use this command to allow customers to connect to your locally run Ruby on Rails setup (as long as you have some port forwarding set up on your local router). I am using this to develop Facebook Open Graph apps, as the application URL (which is configured within Facebook’s app control page) cannot include a custom port (like 3000).
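
One caveat worth noting (not necessarily covered in the post): the PREROUTING rule only applies to traffic arriving from the network. If http://localhost/ (without :3000) should also work on the development machine itself, a matching OUTPUT rule is needed:

$ sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 3000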

View details

Postgres: Howto change owner for all tables

posted in linux-stuff, postgres on 11 Nov 2012,

Just a small tip for today: when moving an RoR application between servers the database user often changes. While it is easy to dump and restore databases using pg_dump and pg_restore, this might lead to invalid table ownerships on the new host.

I’m using a small bash snippet to fix this problem.
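
The exact snippet is in the full post; the usual approach is a loop over pg_tables along these lines (database name and new owner are placeholders):

#!/bin/bash
# change the owner of every table in the public schema of $DB to $NEW_OWNER
DB=mydatabase
NEW_OWNER=myappuser

for tbl in $(psql -qAt -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public';" "$DB"); do
  psql -c "ALTER TABLE \"$tbl\" OWNER TO $NEW_OWNER;" "$DB"
done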

View details

Moving OctoPress to Amazon S3 and CloudFront

posted in cloud, linux-stuff, ruby on rails, security on 03 Nov 2012,

OctoPress is embraced for its simplicity: write blog posts, save them, generate HTML pages and move those onto a web server. As no code is executed server-side, every page can be cached and security risks are low.

So far I’ve been hosting my blog on a rented Hetzner root server in Germany. While there’s no server-side security problem, I’m still running a full-blown server which imposes maintenance overhead on me. No peace of mind. An alternative would be moving to the cloud (Amazon’s S3 storage in my case), but is it worth it?

In my experience just moving Octopress to S3 is not enough; it will be slower than the original setup. But add Amazon’s CloudFront content delivery network to the mix and everything changes..
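
Deployment itself then stays a one-liner after generating the site, e.g. with s3cmd (the bucket name is a placeholder; the CloudFront distribution is configured separately in the AWS console):

# regenerate the static site and sync it to the S3 bucket backing CloudFront
rake generate
s3cmd sync --acl-public --delete-removed public/ s3://my-blog-bucket/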

View details

A full-powered shoebox-sized Desktop

posted in hardware, linux-stuff on 28 Oct 2012,

After three or four years it was time to replace my desktop computer with newer technology. I’ve got a first-generation Intel Core i7-920 quad-core processor: it still packs more than enough power but sadly runs too hot, and thus the cooling system got too loud for my taste.

So, time for a new desktop! I decided to go the mini-ITX route. The main idea was to pack as much power-efficient technology as possible into an as-small-as-possible case. This post describes my hardware experiences..

View details

The Lazy Engineer

posted in life on 01 May 2012,

Recently I’ve switched my working day to a more enjoyable pace – and noticed that my productivity rose too. Too many friends claimed that I’m just plain lazy, so this post tries to clarify my mode of operation.

The basic idea is to reduce procrastination and improve my attention span through voluntary self-censorship.

View details

Generating PDFs with wicked_pdf

posted in ruby on rails on 26 Apr 2012,

Ruby on Rails is perfect for creating web applications, but sometimes you just need to create documents which can be stored or sent through email. While printing is no problem with CSS, not all users are able to “save/print page as PDF”. The ubiquitous Adobe PDF file format seems to be a perfect solution for this problem.

The common solution for this is Prawn (also see the RailsCast about it). Alas, Prawn uses a custom DSL for defining and styling the document, and I would rather reuse existing view partials. princeXML seems to solve this: it transforms existing HTML/CSS into PDF. This allows reusing existing views and partials but comes with a hefty price tag of $3500+.

I’ll investigate wicked_pdf, which takes the same approach as princeXML but comes for free..

View details