Pathogen vs Vundle

Pathogen was the first Vim plugin management system I knew of. The contender is Vundle, which seems to be inspired in its configuration syntax (and name) by Ruby's Bundler.

So let's compare those two.

Read on..

Capybara for automating Pen-Tests

After a successful penetration test a re-test is performed. The common approach is that the customer fixes the code and I perform the necessary steps to confirm that the initial security breach was closed. Sometimes it takes the customer a couple of tries to achieve that.

Most security problems (XSS, CSRF, SQLi) can easily be tested automatically, but I had problems automating tests for server-side authentication and authorization flaws. The test would have to emulate multiple parallel user sessions; it mostly consists of one session trying to access the resources of another user's session.
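The core of such a re-test can be sketched with plain curl and two cookie jars before reaching for a full browser driver (the host, endpoints and document ID below are hypothetical):

```shell
# Log in two separate users, each with its own cookie jar (= session).
curl -s -c alice.jar -d 'user=alice&pass=secret1' https://app.example.com/login
curl -s -c bob.jar   -d 'user=bob&pass=secret2'   https://app.example.com/login

# Bob tries to access a document belonging to Alice; anything other than
# 401/403/404 hints at a broken authorization check.
curl -s -o /dev/null -w '%{http_code}\n' -b bob.jar https://app.example.com/documents/42
```

Capybara becomes attractive once the application needs JavaScript or multi-step flows that plain HTTP requests cannot easily reproduce.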

Seems like a good match for Capybara and Poltergeist.

Read on..

Migrating to Middleman

My blog has a history of migrations. It started as WordPress, then was converted to Octopress. After Octopress stopped receiving update-love and Jekyll became actively maintained again, it switched over to Jekyll. And now it is finally based upon Middleman.

Sorry for any inconvenient bugs or layout errors that happen during the migration.

Read on..

Review: Penetration Testing with BackBox

Full-disclosure: I was asked by PacktPublishing to provide a review of Penetration Testing with BackBox by Stefan Umit Uygur. They offered me a free copy of the ebook; otherwise I have not been compensated by any means for this review.

The book aims to be an introduction to penetration testing for experienced Unix/Linux users or administrators (it seems there are Linux users that aren't administrators by now). After reading the book I believe that the assumed use-case is an administrator who wants to gain some insight into the tools that might be used against his server. Other parts of the book (hash cracking, tools) might allure aspiring script kiddies.

Read on..

Using a (host) reverse-proxy together with LXC application servers

The basic idea is to move application servers into LXC containers while keeping the HTTP server part (which is also responsible for hosting static files) on the host system.

Normally an incoming request would be handled by an HTTP server on the host as well as by an HTTP server on the virtualized client:

  browser -> http server(host) -> http server (guest) -> app-server (guest)

I'm configuring the host HTTP server to directly communicate with the app worker, thus:

  browser -> http server (host) -> app server (guest)

This removes one layer of indirection and simplifies HTTP server configuration (think maximum file sizes, which would otherwise have to be adapted for each web server). This is also possible as LXC containers are located within the host filesystem (i.e. /var/lib/lxc/<container name>/rootfs): the host web server can thus directly access static files without even invoking the guest container in the first place.
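As a sketch, the host-side web server configuration (nginx here; the container name, IP, port and paths are assumptions for illustration) could look like this:

```
server {
    listen 80;
    server_name app.example.com;

    # Serve static assets directly from the container's rootfs on the host.
    root /var/lib/lxc/myapp/rootfs/srv/app/public;

    location / {
        try_files $uri @app;
    }

    location @app {
        # Talk directly to the app server inside the container.
        proxy_pass http://10.0.3.10:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```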

Read on..

How to convert a KVM image into an LXC container

KVM was an improvement over Xen for me. Still, for many use-cases LXC is a more performant, light-weight alternative – which also seems to be en vogue nowadays.

Through switching to LXC I've reduced my overall memory usage a bit – the main benefit is that processes within an LXC container are ordinary processes within the host system. This should allow the host system to manage memory (think cache, buffers, swap, etc.) more efficiently.

I've started converting most of my trusted KVM images into LXC containers; this post contains the necessary steps.
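In rough strokes (the image path and container name are placeholders; the post covers the details) the conversion looks like this:

```shell
# Expose the KVM disk image as a block device and mount its root partition.
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/myvm.qcow2
sudo mount /dev/nbd0p1 /mnt

# Copy the guest filesystem into a fresh LXC rootfs.
sudo mkdir -p /var/lib/lxc/myvm/rootfs
sudo rsync -a /mnt/ /var/lib/lxc/myvm/rootfs/

# Clean up.
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0
```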

Read on..

How to use virt-install to install new virtual machines within libvirt/kvm

I've been using KVM and virt-install to manage virtual machines on one of my servers; this post shows how to use virt-install.
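A typical invocation (VM name, sizes, ISO path and bridge name are examples) looks like this:

```shell
sudo virt-install \
  --name testvm \
  --ram 1024 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=10 \
  --cdrom /srv/iso/debian-netinst.iso \
  --network bridge=br0 \
  --graphics vnc
```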

Read on..

Rogue Access Point and SSL Man-in-the-Middle the easy way

After trying to set up a rogue access point using squid and hostapd, I've seen that KDE's network-manager offers hosted access-point functionality. How easy is it to combine this with Burp for an SSL man-in-the-middle attack? Well, some GUI clicking and 3 command line invocations..

Read on..

How-to setup a rogue access point with a transparent HTTP(s) proxy

I'm always reading about dangerous rogue access points but have never actually seen one in action. So what better than to create a test setup..

Hardware for this test setup will be:

  • my old Linux notebook (a MacBook Pro) as fake access point
  • a small DealExtreme network card (Ralink 5070 chipset). I've actually bought three different wireless cards for under $20 and am trying out the different chipsets. This card is rather small (like a USB stick), so it isn't too conspicuous.

The basic idea is to use hostapd to create a virtual access point. Were I a hypothetical attacker I'd call it 'starbucks', 'freewave' or name it after some coffee shop around the corner. I'm using the notebook's internal wireless card to provide the internet uplink. For the transparent HTTPS proxy I will have to compile a custom version of squid (including SSL support). I'm using Ubuntu 13.10 for this; other Linux distributions should work the same.
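A minimal hostapd configuration for such a rogue AP might look like this (the interface name and SSID are examples):

```
interface=wlan1
driver=nl80211
ssid=freewave
hw_mode=g
channel=6
# deliberately an open network – a rogue AP wants victims to join easily
```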

Read on..

How to use FakeS3 for S3 testing

I'm contributing to a secure cloud project (well, it's not that secure yet, but getting there..). Its backend storage options include S3, so I want to test the S3 functionality against a locally installed S3 server.

I first tried to utilize OpenStack Object Storage (Swift) or Riak, but both solutions were rather heavy-weight and cumbersome to set up. Bear in mind that I just wanted some fake S3 storage server that would be deployed within a local network (without any internet connection), so security, authentication and performance were mostly moot.

Then I came upon FakeS3. This is a simple Ruby gem which emulates an S3 server. Coming from a RoR world this seemed to be a perfect fit for me.
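Getting it running is two commands (the port and storage root are arbitrary choices):

```shell
gem install fakes3
fakes3 -r /tmp/fakes3 -p 4567
```

Any S3 client can then be pointed at http://localhost:4567 instead of the real Amazon endpoint.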

Read on..

Linux: How to force an application to use a given VPN tunnel

One way or another, I end up using VPN services throughout the day:

  • when pen-testing from abroad I really need to log in to my company's network first. Otherwise my provider gets kinda grumpy when I'm doing fast non-cloaked scans against large companies.
  • also when pen-testing I like to use some cloaking VPNs to test the client's detection capabilities
  • if I were ever to use BitTorrent I'd really like to make sure that the torrent program can only communicate through a private proxy (such as PIA).

The easy solution would be to connect the OpenVPN tunnels on startup and just route all traffic through them. Alas, this is way too slow for daily use – and somewhat error-prone: if a tunnel dies while a pen-test is in progress, traffic might escape into 'unsecured' public networks. The same would be true for torrents.
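One way to pin a single application to its tunnel is a network namespace whose only route is the VPN interface (a sketch; the interface name, addresses and application are assumptions, and the post's actual approach may differ):

```shell
# Create a namespace and move the VPN's tun device into it.
sudo ip netns add vpn
sudo ip link set tun0 netns vpn
sudo ip netns exec vpn ip link set lo up
sudo ip netns exec vpn ip addr add 10.8.0.2/24 dev tun0
sudo ip netns exec vpn ip link set tun0 up
sudo ip netns exec vpn ip route add default dev tun0

# Start the application inside the namespace; if the tunnel dies it has
# no route at all instead of falling back to the public network.
sudo ip netns exec vpn sudo -u "$USER" transmission-gtk
```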

Read on..

Git with transparent encryption

This is part three of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. "executive summary" of my findings. Stay tuned.

git was originally written by Linus Torvalds as the SCM tool for the Linux kernel. Its decentralized approach fits well into online OSS projects, and it slowly became the SCM of choice for many. Various dedicated hosting services such as GitHub or Bitbucket arose. In this post I'll look into using git as a replacement for Dropbox for data sharing. As Dropbox has a devastating security history (link needed) I'll look into ways of transparently encrypting remote git repositories.
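To make the idea concrete, here is a self-contained sketch using git's clean/smudge filters with openssl (the key file, cipher and fixed salt are assumptions for illustration; a real setup would manage key and salt more carefully):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
echo "correct horse battery staple" > key.txt

git init -q repo; cd repo
git config user.email demo@example.com
git config user.name demo

# clean = worktree -> repository (encrypt), smudge = repository -> worktree (decrypt).
git config filter.crypt.clean  "openssl enc -aes-256-cbc -pbkdf2 -S 0011223344556677 -pass file:$dir/key.txt"
git config filter.crypt.smudge "openssl enc -d -aes-256-cbc -pbkdf2 -S 0011223344556677 -pass file:$dir/key.txt"
echo 'secret.txt filter=crypt' > .gitattributes

echo 'top secret data' > secret.txt
git add .gitattributes secret.txt
git commit -qm 'add transparently encrypted file'

# The blob stored in the repository is ciphertext ...
git cat-file -p :secret.txt | grep -q 'top secret' && echo LEAKED || echo ENCRYPTED
# ... while a fresh checkout decrypts transparently.
rm secret.txt
git checkout -- secret.txt
cat secret.txt
```

Pushing such a repository to a remote means the hoster only ever sees encrypted blobs.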

Read on..

Encrypted S3 storage filesystems

This is part two of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. "executive summary" of my findings. Stay tuned.

This post tries out some filesystems that directly access S3. I'll focus on Amazon's S3 offering, but there are many alternatives, e.g. OpenStack. Amazon S3 has the advantage of unlimited storage (even if infinite storage would come with infinite costs..). S3 itself has become a de-facto standard for object-based file storage.
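For a first impression, mounting a bucket with s3fs looks roughly like this (the bucket name and mount point are examples):

```shell
# ~/.passwd-s3fs contains ACCESS_KEY:SECRET_KEY and must not be world-readable.
chmod 600 ~/.passwd-s3fs
mkdir -p /mnt/s3
s3fs mybucket /mnt/s3 -o passwd_file=$HOME/.passwd-s3fs
```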

Read on..

Secure Online Data Backup using Duplicity

This is part one of a series about encrypted file storage/archive systems. My plan is to try out duplicity, git using transparent encryption, s3-based storage systems, git-annex and encfs+sshfs as alternatives to Dropbox/Wuala/Spideroak. The conclusion will be a blog post containing a comparison a.k.a. "executive summary" of my findings. Stay tuned.

Duplicity is a command-line tool similar to rsync: you give it two locations and it synchronizes the first to the second. Duplicity adds features over rsync; especially interesting for me are incremental encrypted backups to remote locations. This form of storage prevents the hoster from gaining any information about my stored data or its metadata (filenames, etc.).
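A typical invocation (the host, paths and GPG key ID are examples) looks like this:

```shell
# The first run creates a full backup, subsequent runs are incremental.
duplicity --encrypt-key ABCD1234 /home/user sftp://backup@server.example.com/backups/home

# Restoring works in the opposite direction:
duplicity sftp://backup@server.example.com/backups/home /tmp/restored-home
```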

Duplicity supports multiple storage backends; the most interesting for me were Amazon S3 and SSH/SFTP. All my examples will use the SFTP backend as I tend to have SSH servers lying around. Read on..

Penetration testing

I am a RoR developer gone pen-testing for the last couple of months. Clients range from smallish web portals to large multi-national financial institutions. So far I have a success rate well above 85%.

This post reflects upon my modus operandi. It contains a high-level view of how I work: while specific techniques change, the overall frame of mind stays the same, so I consider the latter more important than the former. I also hope for feedback regarding techniques and tools.

Read on..

Avoiding Internet/Network Surveillance

Last week's World Conference on International Telecommunications (WCIT) brought internet surveillance into the public news: one outcome of the conference was the standardization of DPI technology. This infrastructure standard will make it easier for governments to implement large-scale surveillance and/or filtering. The funny thing is that governments already have those capabilities; they only want to standardize them. The public outrage came too late.

So let's protect you from governments at home or abroad, the RIAA, MPAA, random eavesdroppers and anyone else who wants to listen in on your secrets while you're surfing the Internet. The initial steps are easy and cheap (or free), so there's no reason to let your guard down. Read on..

Linux: How to encrypt your data on hard drives, USB sticks, etc.

Imagine your laptop (or desktop computer) being stolen. How long will it take and how much will it cost you to get back on track? Hardware will be easy: the cost of a new premium desktop is around $1000, a new laptop around $2000. Your data "should" always be backed up somewhere anyways.

But this neglects a hidden cost: some thief has all your data, including all your online identities, photos, source code for software projects and private notes/pictures that you do not want published. How much do you value your online reputation? Would you change all your online account passwords and connected applications after a theft? How much time and effort would this cost you – and could you do it fast enough before the attacker utilizes that data against you?

I'm employing transparent encryption to mitigate this scenario. As long as sensitive data only hits my hard drives/SSDs encrypted, nothing can be extracted by a thief. This is done in a very lazy fashion: no additional password entry is needed for internal hard drives (i.e. /home), and one password is used per external drive. Read on..
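As a sketch, setting up such an encrypted external drive with LUKS looks like this (the device name is a placeholder; luksFormat destroys all data on it):

```shell
sudo cryptsetup luksFormat /dev/sdX1          # one-time: encrypt the partition
sudo cryptsetup luksOpen /dev/sdX1 external   # asks for the drive's password
sudo mkfs.ext4 /dev/mapper/external           # one-time: create a filesystem
sudo mount /dev/mapper/external /mnt
```

For internal drives the unlocking step can be automated (e.g. via a key file), which is what makes the no-extra-password setup possible.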

Linux: How to forward port 3000 to port 80

Another small tip: to locally forward port 80 to port 3000 use the following Linux iptables command:

$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

You can use this command to allow customers to connect to your...

Read on..

Postgres: Howto change owner for all tables

Just a small tip for today: when moving an RoR application between servers, the database user often changes. While it is easy to dump and restore databases using pg_dump and pg_restore, this might lead to invalid table ownerships on the new host...
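The fix can be scripted; a sketch (the database name, new owner and the sample table list are assumptions):

```shell
NEW_OWNER=newowner
DB=mydb

# Against a live database you would list the tables via psql and pipe the
# generated statements back in, e.g.:
#   psql -qAt -d "$DB" -c "SELECT tablename FROM pg_tables WHERE schemaname='public'" |
#     while read t; do echo "ALTER TABLE \"$t\" OWNER TO $NEW_OWNER;"; done | psql -d "$DB"

# For illustration, generate the statements for a fixed table list:
printf '%s\n' users posts comments | while read t; do
  echo "ALTER TABLE \"$t\" OWNER TO $NEW_OWNER;"
done
```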

Read on..

Moving OctoPress to Amazon S3 and CloudFront

OctoPress is embraced for its simplicity: write blog posts, save them, generate HTML pages and move those onto a web server. As no code is executed server-side, every page can be cached and security risks are low.

So far I'm hosting my blog on a rented Hetzner root server in Germany. While there's no server-side security problem, I'm still running a full-blown server which imposes maintenance overhead on me. No peace of mind. An alternative would be moving to the cloud (Amazon's S3 storage in my case), but is it worth it?

In my experience just moving Octopress to S3 is not enough; it will be slower than the original setup. But add Amazon's CloudFront content delivery network to the mix and everything changes..
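Deploying then boils down to generating the site and syncing the output (the bucket name is an example):

```shell
rake generate
s3cmd sync --delete-removed --acl-public public/ s3://blog.example.com/
```

CloudFront is then configured with the bucket as its origin.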

Read on..

A full-powered shoebox-sized Desktop

After three or four years it became time to replace my desktop computer with newer technology. I've got a first-generation Intel Core i7-920 quad-core processor: it still packs more than enough power but sadly gets too hot, and thus the cooling system got too loud for my taste.

So it's time for a new desktop! I decided to go the mini-ITX route. The main idea was to pack as much power-efficient technology as possible into an as-small-as-possible case. This post describes my hardware experiences..

Read on..

The Lazy Engineer

Recently I've switched my working day to a more enjoyable pace – and noticed that my productivity rose too. Too many friends claimed that I'm just plain lazy, so this post tries to clarify my mode of operation.

The basic idea is to reduce procrastination and improve my attention span through voluntary self-censorship.

Read on..

Generating PDFs with wicked_pdf

Ruby on Rails is perfect for creating web applications, but sometimes you just need to create documents which can be stored or sent through email. While printing is no problem with CSS, not all users are able to "save/print page as PDF". The ubiquitous Adobe PDF file format seems to be a perfect solution for this problem.

The common solution for this is Prawn (also see the RailsCast about it). Alas, Prawn uses a custom DSL for defining and styling the document, and I would prefer to reuse existing view partials. PrinceXML seems to solve this: it transforms existing HTML/CSS into PDF. This allows reusing existing views and partials but comes with a hefty price tag of $3500+.

I'll investigate wicked_pdf, which takes the same approach as PrinceXML but comes for free..

Read on..