Wednesday, 23 April 2014

Heartbleed revisited

So, the Heartbleed bug is now a few days old and the work is nearly done.

So let's talk a bit about the history.
  1. On Sun, 1 Jan 2012 00:59:57 +0200 somebody committed a heartbeat extension to the OpenSSL git repository.
  2. Last Sunday/Monday Google found a bug in this extension, or at least finally reported it.
  3. Late Monday and early Tuesday we had a tiny little website which could identify whether your server was affected by this bug.
  4. A few hours later a Go tool was available, so we could check even more systems.
  5. Finally an nmap script became available, and now we can scan a full IP range and every port.
  6. Most SSL CAs were unable to handle the re-certification (or re-signing) of certs via API; a solution was available by early Thursday.
So what did we do during these lovely days (and what you should have done too)?
  1. We started closing the SSL vulnerability by patching all our systems
  2. We regenerated our private SSL keys and had them re-certified
  3. We started to call the bug the "Fingerbleed" bug
  4. We replaced all of our certs
  5. We talked to some customers, helped them identify the bug, and provided solutions
  6. We changed all our passwords
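Step 2 above can be sketched with plain openssl commands; the file names and the subject below are just examples of mine, and the new CSR still has to be re-signed by your CA:

```shell
# Generate a fresh 2048-bit RSA key; the old key must be treated as leaked
openssl genrsa -out example.com.key 2048

# Build a new CSR from the fresh key to hand to the CA for re-signing
openssl req -new -key example.com.key -subj "/CN=example.com" -out example.com.csr
```

Once the CA has signed the CSR, swap in the new key and cert and revoke the old certificate.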

So finally I guess we can say it with Rodney Atkins' lyrics:

If you're goin' through hell keep on going
Don't slow down, if you're scared don't show it
You might get out before the devil even knows you're there

Wednesday, 9 April 2014

Yet Another Heartbleed post

Well, as I have spent hours on the Heartbleed bug recently, I just need to tell you this:

Update your OpenSSL library now!

I guess this bug could be one of the most urgent bugs we have ever had.

What you should do: update the OpenSSL package via your distribution's package manager.

And don't forget to restart your apache/nginx/lighttpd/postfix/whatever server afterwards, so it actually picks up the patched library.
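A quick sanity check I find handy: flag the upstream releases known to be affected (1.0.1 through 1.0.1f). Note that this is only a version heuristic; distributions often backport the fix without bumping the version string.

```shell
# Read the installed OpenSSL version, e.g. "1.0.1e"
v=$(openssl version | awk '{print $2}')

# 1.0.1 up to and including 1.0.1f shipped the vulnerable heartbeat code
case "$v" in
  1.0.1|1.0.1[a-f]) echo "$v: potentially vulnerable, update now" ;;
  *)                echo "$v: not an affected upstream release" ;;
esac
```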

As I normally maintain a larger range of IP addresses, I just wrote a short script:

Step 1: fping -g IPRANGE > ips.txt
Step 2: sh scan.sh > test.txt, where scan.sh is:
for line in `cut -d ' ' -f1 ips.txt`
do
        ./heartbleeder $line:443
done
Step 3: log in to the affected servers and solve the issue

have fun!

Thursday, 3 April 2014

[Note to myself] Raspberry Pi, playing around (Android, Chrome OS)

So, I have now had my Raspberry Pi for more than a week. I am still looking for something "awesome" to do with it. There are two operating systems which I would really like to use on it.
  • Chrome OS
  • Android
Well, as Android is normally built to run on ARM systems, I thought it should be the first try. Sadly the first try yesterday failed, and I still don't know why. The screen just stays black and the light on my Raspi stays red.

So, this blog post is mainly a "note to myself" so I can find all the links again :-) Maybe some of you will find it useful too.


So, it's a mix of link (1) and link (2).
While doing all the needed chroot stuff you can easily see that it's a Gentoo underneath; well okay, I can live with that. The compilation of the cross compiler takes a while. Okay, actually everything takes a while.


Some useful links which I might need again:
Well, I tried the mentioned version, but it failed; it does not even boot, only a black screen. So I played around a bit more and took the kernel from my Pidora SD card. That kernel boots, but goes into an endless boot loop.
As I know now, there are a lot of prerequisites the Android kernel needs, so the Pidora kernel is just too "small". Well, I will try to compile it on my own. But without a working config to import it will be a hell of a lot of work.
While I am writing this blog entry, Broadcom has given the source for its acceleration drivers to the community, and someone has already solved the Quake II quest and ported it.
So, what's next?

Conclusion: as everything seems to be strange, I guess the best idea would be to take the Pidora build (or maybe something smaller, we will see), add the new DMA driver (see the Broadcom link in the Android section), and build my own system.

I am thinking about something which includes
  • chrome
  • fvwm
Okay, so that's my link list.

Monday, 31 March 2014

AWS: Amazon Web Services

Last week I took a short look at the AWS cloud system, basically because there was a free 12-month offer (750 hours a month). Only a few instance types are supported by this offer, so I created a "micro" machine.

You can choose between different operating systems, such as Red Hat, CentOS, and Amazon's own distribution (Amazon Linux AMI 2013.09.2), which is basically another "fork" of Red Hat/CentOS.

Starting with documentation: I always like documentation, and there are some really good whitepapers around; you can find them here. I guess you should also take a look at the overview whitepaper, which is here.
Amazon pushed a lot of services into their cloud, like many different databases (MongoDB, DynamoDB, MySQL) and load balancers. The system is fully scalable, so there is no need to buy a huge infrastructure; you can scale it to your needs. Pricing is based on the amount of resources you use.

Currently there are two promos running

  • one promo where Amazon and Intel are giving you 600 hrs of compute time
  • and the basic 720 hrs and 12 months of free usage here

AWS comes with some really nice features, for example on the security side: when you create a new instance you _must_ create a key pair to log in, and be sure to save the private key, as you can't download it twice.
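For completeness, key handling with the aws command line tool looks roughly like this; the key name "my-key" and the placeholder address are assumptions of mine, not from the offer pages:

```
# create a key pair and store the private key locally (you only get it once)
aws ec2 create-key-pair --key-name my-key --query 'KeyMaterial' --output text > my-key.pem
chmod 400 my-key.pem

# log in to the instance (ec2-user is the default user on the Amazon Linux AMI)
ssh -i my-key.pem ec2-user@<public-ip>
```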

Let's talk about the Amazon Linux AMI:

  • kernel 3.4.73-64.112.amzn1.x86_64
  • you can use yum 
  • nginx is available in version 1.4.3, release 1.14.amzn1
  • mysql in version 5.5.34
It seems SELinux is available but not installed by default; I don't know if this is good or bad. You can use the micro instance within the 720 hrs free offer.

So let's try it!

Thursday, 27 March 2014

Vyatta Cluster

Have you ever tried Vyatta?

Vyatta is an open source network operating system providing advanced IPv4 and IPv6 routing, stateful firewalling, IPsec and SSL (OpenVPN) VPNs, and more.

So basically it's the software you want when it comes down to running your own core router or firewall. The configuration style is pretty close to Juniper's: it has a great structure and very good autocompletion. There are two versions available: an open source, free-to-use one, and of course an enterprise version.

Now let's assume that you have your two machines installed; the installation is quite easy, you can run it from CD. Log in with user "vyatta" and password "vyatta" and type

install image

A short text interface asks you some questions. Right after that you can reboot the system and enjoy the basics. Let's start with configuring the network.
Let's say that our two machines will each run with their own IP, and we want a shared failover address between these two machines. For now we don't think about which service will use this address, as it could be anything from IPsec to outside NAT.

set interfaces ethernet eth0 address <x.x.x.x/x>

is the command you use to set these addresses.
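As an example with made-up documentation addresses (192.0.2.0/24 is reserved for exactly this purpose), the two machines could look like this:

```
set interfaces ethernet eth0 address '192.0.2.2/24'     (on the primary)
set interfaces ethernet eth0 address '192.0.2.3/24'     (on the secondary)
```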
Now we set up a Vyatta cluster via the following commands (the numbers in parentheses are just for my documentation!):

set cluster group myfirstcluster (1)
set cluster group myfirstcluster primary 'first-router' (2)
set cluster group myfirstcluster secondary 'second router' (3)
set cluster group myfirstcluster service '' (4)
set cluster group myfirstcluster monitor ''(5)
set cluster interface 'eth0'
set cluster keepalive-interval '2000' (6)
set cluster dead-interval '10000'(7)
set cluster pre-shared-secret '!somesecret!'(8)

So what do we do here?
In (1) we just name our cluster, so the instance will be "myfirstcluster".
In (2) and (3) we define the primary and secondary system; please set them to the names you have given your systems.
In (4) we set the service IP; here we say that we want this IP on interface eth0 to be the IP for the cluster instance.
(5) adds a monitor to the system. Whenever one node can't connect to the other node, it will check if the monitor is available; if not, the node will not take over the service IP, as it seems the machine itself has a problem. You can add as many monitors as you want.
In (6) we define the keepalive interval, i.e. the interval at which keepalive packets are sent; here it is set to 2000 ms.
(7) is the dead interval: how many milliseconds we wait before we assume the node to be dead.
And in (8) of course we set a pre-shared secret, as we don't want some other node to shut down our system.

Basically you do this on both machines.
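And don't forget: as usual in the Vyatta configuration mode, nothing of the above is active until you commit it, and it is lost on reboot unless you save it:

```
commit
save
```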

Now something very useful: as Vyatta is just a Debian underneath, you can always use "sudo -i" to become root and run tcpdump or the like. As root you can also perform failovers by hand; you will find the scripts at:
  •  /usr/share/heartbeat/
    • hb_standby - will put the node into standby mode
    • hb_takeover - will make the node master again