July 18th, 2012 | Categories: Solaris, Technology

I’ve seen anti-spoofing mentioned on #illumos and #openindiana a few times, but googling it turns up little information. I didn’t check it out until rmustacc advised me to do so regarding a KVM IPv6 issue I’m facing.

After asking around and doing some further research, it also seems to go by the name of Link Protection in the Oracle Solaris documentation.

I only found this out by actually figuring out what I had to set using dladm and then googling the parameter.

So what is it and do I need it?
Link protection protects you from zones or (K)VMs that try to behave badly. For example, a VM could set a different IP or try to change its MAC address. In most cases you do not want this to happen.

I’m sold! How do I use it?
Well, you can set different modes: ip-nospoof, dhcp-nospoof, mac-nospoof and restricted.
They can be used in any combination you want.

  1. ip-nospoof: limits outgoing traffic to source IPs learned through DHCP or listed in the allowed-ips property.
  2. mac-nospoof: prevents the zone admin from changing the MAC address.
  3. dhcp-nospoof: prevents Client ID/DUID spoofing for DHCP.
  4. restricted: only allows IPv4, IPv6 and ARP protocols.

You can configure this using the dladm set-linkprop command. You can find some practical examples in the quick reference section below.

For some reason this is disabled by default in most distributions with the exception of SmartOS.

Quick Reference

Check the current configuration:

dladm show-linkprop -p protection,allowed-ips vnic0

Disable link protection:

dladm reset-linkprop -p protection vnic0

Enable anti MAC-spoofing:

dladm set-linkprop -p protection=mac-nospoof vnic0

Enable anti IP-spoofing:

dladm set-linkprop -p protection=ip-nospoof vnic0
dladm set-linkprop -p allowed-ips=172.16.30.75,172.16.20.75 vnic0

Enable anti Client ID/DUID-spoofing:

dladm set-linkprop -p protection=dhcp-nospoof vnic0

This can be further restricted using allowed-dhcp-cids, in a similar fashion to allowed-ips. If allowed-dhcp-cids is not set, the interface’s MAC address will be used.

I haven’t tested this one. From what I’ve understood, it is of little use if you also have mac-nospoof set, since you can’t use an incorrect MAC to spoof your Client ID with. (Comments welcome.)
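
For completeness, here is a hedged sketch of pinning the allowed client IDs; the value below is a made-up ASCII client ID, so substitute whatever your DHCP clients actually send (hex values prefixed with 0x should also be accepted):

# hypothetical client ID "orion0"; adjust to your environment
dladm set-linkprop -p protection=dhcp-nospoof vnic0
dladm set-linkprop -p allowed-dhcp-cids=orion0 vnic0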

Restrict traffic to IPv4, IPv6 and ARP:

dladm set-linkprop -p protection=restricted vnic0

Combining them:[1]

dladm set-linkprop -p protection=mac-nospoof,restricted vnic0

Hopefully this has been useful for you all!

Comments Off on Solaris Antispoof / Link Protection
July 13th, 2012 | Categories: Solaris, Technology

As mentioned in my previous Solaris related post, I’ve recently started using Solaris.

What I didn’t mention is that I briefly used Oracle Solaris 11 11/11, but quickly moved to OpenIndiana. I personally did not like the way Oracle is doing things, plus OpenIndiana has the added benefit of KVM and some extra goodies here and there.

I will now show you one of the neat things that will probably never make it into Oracle Solaris.

svcs -L

This little switch for svcs[1] prints the path of the log file for each service. Normally you would use -x and then manually extract the path from the output.
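
For comparison, the old way looks something like this; svcs -x prints a verbose description whose “See:” lines include the log file, which you then have to copy out by hand (the grep pattern assumes the standard /var/svc/log location):

svcs -x smb/server | grep /var/svc/log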

Say you want to view the log for the built-in SMB server; all you need to do on an Illumos-based distribution is the following.

less `svcs -L smb/server`

You can drive this further (get the 10 most recently touched log files, including the ones from zones):

ls -tal `svcs -LZ` | head -10

To quote Bryan M. Cantrill, “Small tools doing well defined things.” This is why I love UNIX.[2]

If you didn’t know about this already, you will love this! If you did know about it, you already love this!

Comments Off on Quick lookup of SMF log file
July 9th, 2012 | Categories: Linux, Networking, Technology

Here is a short tutorial on how to set up multi-stage authentication with an Apache HTTPd 2.4 server.

The first stage is 2-way SSL, so both server and client need to present a certificate.
The second stage is password authentication, where the username has to match the CN of the client certificate.

This is similar to the Belgian eID, except that instead of needing a PIN to unlock the certificate, we require a certificate and a password. Although this is not 100% the same, the security it offers is comparable for most purposes.

In my 2 test cases the password is queried from PAM, but a simple htpasswd file will work as well.

I assume you can configure a normal SSL-based virtual host on httpd and have it working. I also assume you are able to create and sign the client certificates yourself using openssl and a private CA. If there is demand, I may write a short post on that at a later date.
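
If you need a starting point for the client certificates, here is a minimal, hedged sketch using a plain openssl CA; the file names, the user “alice” and the validity period are assumptions, so adapt them to your own CA setup:

# key + CSR for the user; the CN must match the HTTP username later on
openssl req -new -newkey rsa:2048 -nodes \
	-keyout alice.key -out alice.csr \
	-subj "/O=Blackdot/OU=Users/CN=alice"

# sign the CSR with the private CA (ca.crt and ca.key are assumed to exist)
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
	-CAcreateserial -out alice.crt -days 365

# bundle key and certificate into a PKCS#12 file for browser import
openssl pkcs12 -export -in alice.crt -inkey alice.key \
	-certfile ca.crt -out alice.p12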

Below is a full configuration:[1]

Listen 8443

<VirtualHost *:8443>
	ServerName		secure.blackdot.be
	ServerAlias		nara

	CustomLog		/srv/http/logs/access_log common
	ErrorLog		/srv/http/logs/error_log

	SSLEngine		on
	SSLCertificateFile	/srv/http/ssl/server.crt
	SSLCertificateKeyFile	/srv/http/ssl/server.key
	SSLCACertificateFile	/srv/http/ssl/ca.crt
	SSLCARevocationFile	/srv/http/ssl/ca.crl

	SSLProtocol		TLSv1.2 TLSv1.1 TLSv1
	SSLCipherSuite		HIGH:!LOW:!aNULL:!MD5

	# php lockdown
	php_admin_value open_basedir "/srv/http/htdocs:/srv/http/tmp:/tmp"
	php_admin_value upload_tmp_dir "/srv/http/tmp"
	php_admin_value session.save_path "/srv/http/tmp/sessions"


	<FilesMatch "\.(cgi|shtml|phtml|php)$">
		#SSLOptions +StdEnvVars +ExportCertData
		SSLOptions +StdEnvVars
	</FilesMatch>
	
	## Fancy SSL Authentication 
	# /: optional client certificate
	# /*: require client certificate + 2 step authentication
	# /gatekeeper: don't need client certificate
	DefineExternalAuth pwauth pipe /srv/http/bin/pwauth
	DocumentRoot	/srv/http/htdocs
	<Directory /srv/http/htdocs>
		SSLVerifyClient require
		
		AuthType Basic
		AuthName "Secure Area"
		AuthBasicProvider external
		AuthExternal pwauth	
	
		# work around a bug in the new authentication code, should be fixed in 2.4.3
		#<RequireAll>
		#	Require valid-user
		#	<RequireAny>
		#		Require user workaround_for_PR_52892
		#		Require expr ( \
		#			(%{SSL_CLIENT_S_DN_O} == "Blackdot") && \
		#			((%{SSL_CLIENT_S_DN_OU} == "Admins") || (%{SSL_CLIENT_S_DN_OU} == "Users")) && \
		#			(%{SSL_CLIENT_S_DN_CN} == %{REMOTE_USER}) \
		#		)
		#	</RequireAny>
		#</RequireAll>
		<RequireAll>
			Require valid-user
			Require expr ( \
				(%{SSL_CLIENT_S_DN_O} == "Blackdot") && \
				((%{SSL_CLIENT_S_DN_OU} == "Admins") || (%{SSL_CLIENT_S_DN_OU} == "Users")) && \
				(%{SSL_CLIENT_S_DN_CN} == %{REMOTE_USER}) \
			)
		</RequireAll>

	</Directory>

	# fix upload of large files (bug in renegotiate)
	<Directory /srv/http/htdocs>
		SSLRenegBufferSize 134217728
	</Directory>
</VirtualHost>

Server Certificate:

	SSLCertificateFile	/srv/http/ssl/server.crt
	SSLCertificateKeyFile	/srv/http/ssl/server.key
	SSLCACertificateFile	/srv/http/ssl/ca.crt
	SSLCARevocationFile	/srv/http/ssl/ca.crl

It is very important to include your CA; it is required for client certificate validation.
I also recommend setting the Certificate Revocation List, so that you can revoke access for compromised client certificates.
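
As a hedged sketch, revoking a compromised certificate and regenerating the CRL with a plain openssl CA could look like this; it assumes an openssl.cnf describing the CA, which is not shown here:

# mark the certificate as revoked in the CA database
openssl ca -config openssl.cnf -revoke alice.crt

# regenerate the CRL that httpd reads via SSLCARevocationFile,
# then reload httpd so the new CRL is picked up
openssl ca -config openssl.cnf -gencrl -out /srv/http/ssl/ca.crl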

Enabling Client Certificate Authentication:

	<Directory /srv/http/htdocs>
		...
		SSLVerifyClient require
		...
	</Directory>

You can also set this to optional, which can be useful if you have a portal to retrieve the client certificate, but it’s safer to have it set to require for the entire virtual host.
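
As a rough sketch, an optional area for such a portal could look like the following, borrowing the /gatekeeper path from the comments in the main example; if that path lives under the main DocumentRoot, the Require rules from the <Directory> block would still apply unless you relax them, which is not shown here:

	# certificate is requested but not required, so a visitor
	# without one can still reach the certificate portal
	<Location /gatekeeper>
		SSLVerifyClient optional
	</Location>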

Enabling HTTP Authentication:

	<Directory /srv/http/htdocs>
		...
		AuthType Basic
		AuthName "Secure Area"
		AuthBasicProvider external
		AuthExternal pwauth
		...
	</Directory>

This is the second stage of the authentication. I’m using mod_auth_external in combination with pwauth to authenticate against PAM.

This makes it easy for users with shell access to change their own passwords; however, you probably want to use file-based authentication instead. Every authentication provider should work.
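
If you prefer the file-based route, a minimal sketch could look like this; the htpasswd file path and the user are assumptions:

# create the password file and a first user (drop -c for additional users)
htpasswd -c /srv/http/conf/htpasswd alice

Then swap the provider in the <Directory> block:

	AuthType Basic
	AuthName "Secure Area"
	AuthBasicProvider file
	AuthUserFile /srv/http/conf/htpasswd

The certificate check in the next section works exactly the same; the htpasswd username just has to match the certificate’s CN.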

Linking Client Certificate to the HTTP User:

	<Directory /srv/http/htdocs>
		...
		<RequireAll>
			Require valid-user
			Require expr ( \
				(%{SSL_CLIENT_S_DN_O} == "Blackdot") && \
				((%{SSL_CLIENT_S_DN_OU} == "Admins") || (%{SSL_CLIENT_S_DN_OU} == "Users")) && \
				(%{SSL_CLIENT_S_DN_CN} == %{REMOTE_USER}) \
			)
		</RequireAll>
		...
	</Directory>

Here is where the magic happens! This is why this only works on the 2.4 branch: we can now nest requirements!
The <RequireAll> block requires all the Require statements inside it to evaluate to true; if not, the request is rejected.

The valid-user requirement should be self-explanatory. The really interesting part is the expr line. Here we check whether the following conditions are met for the client certificate:

  1. Organization matches ‘Blackdot’
  2. OU matches ‘Admins’ or ‘Users’
  3. Common Name matches the username provided via HTTP Authentication

Of course the validity of the certificate is already checked by SSLVerifyClient.
This is probably where the most editing will be needed to get this to fit your needs.

The main example also has an alternative with a workaround for pre-2.4.3 releases.

Fix file uploads:

	<Directory /srv/http/htdocs>
		SSLRenegBufferSize 134217728
	</Directory>

There seem to be some issues with large file uploads (a renegotiation bug); setting SSLRenegBufferSize large enough seems to solve this.

That’s it, enjoy!

Comments Off on multi stage authentication using Apache HTTPd 2.4
July 9th, 2012 | Categories: Linux, Networking, Technology

I previously mentioned rpmbuild. I wanted to write a little post on what it does and how useful it is, but I thought it would be easier to just show you.

So what is rpmbuild? It is a tool for building RPM packages based on a spec file. The resulting RPM should be close to the one provided by your distribution. I’ve used rpmbuild on both Fedora and CentOS to create Apache HTTPd 2.4 binaries that work as a drop-in replacement for the system ones.

So how does it work? I’ll just show you!

You need to install rpmbuild first.[1]

nara ~ # yum install rpm-build
nara ~ # useradd -m -c "rpmbuild user" rpmbuild

Download the apr, apr-util and httpd source code from apache.org.
Then simply run rpmbuild against the tar archive.[2]

nara ~ # sudo su - rpmbuild
rpmbuild@nara ~ $ rpmbuild -ta httpd-2.4.2.tar.bz2
rpmbuild@nara ~ $ sudo rpm -Uvh rpmbuild/RPMS/x86_64/httpd-2.4.2-1.x86_64.rpm

This is far from a step-by-step guide, but it should give you a general idea of what rpmbuild does and how it can be used for building binaries on CentOS or another RPM-based distribution.

Rpmbuild will point out which packages you are missing; just install them and you should be fine.
You may also experience an error with the new apr-util; if you do, get the patch posted here.
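
For example, a round of installing the build dependencies rpmbuild complains about might look like this; the exact package names are assumptions and will differ per system, rpmbuild tells you precisely which ones it wants:

nara ~ # yum install -y gcc autoconf pcre-devel openssl-devel zlib-devel lua-devel libxml2-devel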

Comments Off on Apache HTTPd 2.4 on RPM based distributions
July 7th, 2012 | Categories: Hardware, Personal, Technology

I was looking for a new headphones/microphone combo. I’ve got a game PC again, so I play at night from time to time and keep my mother awake due to not having a headset.

I also needed something for Skype. For my Game PC I could use the on-board analog plugs. But things like watching movies and anime, listening to music,… I do from my MacBook Pro.

Although I could probably reach the analog ports if the headset had a long cable, my experience is that the plastic wires don’t generally like to be crammed together with a bunch of other cables. But to my surprise most seem to be USB these days. Not a big issue, my Logitech G110 keyboard has one of those analog to USB audio port converters. It works fine on OSX. But according to the internet, that is not always the case :s

I ended up going for the Corsair Vengeance 1500 Dolby 7.1 USB.

Although it was not mentioned on the Corsair product page that it works on OS X, I did find some people who got it to work. I’m now one of those 🙂.

So what about quality? I’ll keep it short. It feels sturdy: mostly plastic with some metal, but very nice. The cable is also one of those nylon ones; I really like those, so that is a plus for me.

I’m no audiophile, but they sound pretty decent when listening to music. Nice deep bass, high notes seem clear as well.

I played some action scenes from some movies I had lying around; they sound wonderful too, as good as or maybe even better than my external 6.1 speakers from Logitech. Do note this was on the Mac, so I have no idea whether the 7.1 was working or not.

The microphone is also very clear, way better than the bluetooth earpiece I’ve been using.

During gaming there is a slight crackle; however, this was only in UT3. I did not have this issue with Steam games.

I do have a slight discomfort where the headphones are a bit tight, but I have that with all headphones, so I can’t really fault them for that.

Overall my first impression is very good.

Comments Off on Corsair Vengeance 1500 Dolby 7.1 USB
July 5th, 2012 | Categories: Networking, Technology

Here is a little trick to connect to a server that requires an SSL client certificate when your client does not support it.

To make it work, your client must be able to use a proxy. We will use this proxy to rewrite requests for certain servers to a reverse proxy that injects the client certificate.

Let’s get to the good stuff![1]

Let’s start with the forward proxy; this is what will be configured in your browser, webapp, etc.

Listen 8080

<VirtualHost *:8080>
	LogLevel			error
	ErrorLog			logs/error_log-fp
	TransferLog			logs/transfer_log-fp
	#RewriteLogLevel	9
	#RewriteLog			logs/rewrite_log-fp

	RewriteEngine		On
	RewriteRule		proxy:https://secure.blackdot.be/(.*)	  https://127.0.0.1:9901/$1	[P]
	RewriteRule		proxy:https://repository.blackdot.be/(.*) https://127.0.0.1:9902/$1	[P]

	ProxyRequests		On
	ProxyVia			On

	<Proxy *>
		Order allow,deny
		allow from all
		# you may want to narrow this down to only the client's IP
	</Proxy>
</VirtualHost>

We filter out requests for secure.blackdot.be and repository.blackdot.be and send them to our reverse proxies. Other requests are passed along untouched. The proxy is listening on port 8080.

LoadModule ssl_module modules/mod_ssl.so

Listen 127.0.0.1:9901

<VirtualHost 127.0.0.1:9901>
	LogLevel			error
	ErrorLog			logs/error_log-rp
	TransferLog			logs/transfer_log-rp

	SSLProxyEngine					on
	SSLProxyCACertificateFile		conf/ssl/RP9901CA.crt
	SSLProxyMachineCertificateFile	conf/ssl/RP9901CERT.crt
	SSLProxyVerifyDepth				10
	SSLProxyVerify					none

	# IP app server
	ProxyRequests			Off
	ProxyPass				/	https://secure.blackdot.be/
	ProxyPassReverse		/	https://secure.blackdot.be/
</VirtualHost>

The reverse proxy on port 9901 proxies the requests to secure.blackdot.be; it offers the client certificate stored in conf/ssl/RP9901CERT.crt.[2]

You would create a similar reverse proxy on port 9902 for repository.blackdot.be.
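
For completeness, here is a hedged sketch of that second reverse proxy; it simply mirrors the first one, with the certificate file names being assumptions:

Listen 127.0.0.1:9902

<VirtualHost 127.0.0.1:9902>
	LogLevel			error
	ErrorLog			logs/error_log-rp2
	TransferLog			logs/transfer_log-rp2

	SSLProxyEngine					on
	SSLProxyCACertificateFile		conf/ssl/RP9902CA.crt
	SSLProxyMachineCertificateFile	conf/ssl/RP9902CERT.crt
	SSLProxyVerifyDepth				10
	SSLProxyVerify					none

	ProxyRequests			Off
	ProxyPass				/	https://repository.blackdot.be/
	ProxyPassReverse		/	https://repository.blackdot.be/
</VirtualHost>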

This should be enough to get this working; if not… you probably shouldn’t be using SSL client authentication.

Comments Off on SSL Client certificate injection using reverse and forward proxies
July 4th, 2012 | Categories: Personal, Solaris, Technology

I’ve recently moved away from Linux to Solaris (OpenIndiana, more specifically) for my NAS.

Although I like the Linux community, the Solaris one is totally different.
Note: I’m talking about the Illumos kernel, distros like SmartOS and OpenIndiana, and projects like OpenCSW.

It’s not a huge community; to me this is part of the appeal. You see a fair number of the same people in the channels of the respective open source projects and efforts. This is very pleasant.

Overall I find them all to be very helpful!

It’s not the usual RTFM response you get in some Linux forums or channels on IRC, where the quickest way to get help is to troll and say “Windows can do XYZ, why can’t Linux?” Here you just need to ask nicely to be pointed in the right direction. (#solaris is the notable exception to this rule.)

Okay you may not always get an answer you like, but it usually is the correct answer 🙂

Here is an awesome example of the helpfulness of the community:


And yes, I did get a nicely packaged ZNC out of this. It required some back and forth between me and Jan, but it was a very pleasant experience!

So I’d like to thank you all for the very warm welcome!

If you have an urge to try Solaris, do give in to it! But just skip the Oracle Solaris 11 11/11 stuff and get to the good stuff over at the distribution page on the Illumos wiki.

Update July 31st, 2012: while working on a comparison of Illumos distributions, I got this response:
<meth> sjorge: What's up with the OI installer?
I replied by kindly stating what I think could be improved; most of those already had an incident in their tracker! Awesome.

Comments Off on Welcome to Solaris, have a beer.
June 30th, 2012 | Categories: Linux, Technology

I have one Arch server left at home, and it is behaving badly. It has some fairly new stuff on it, so I can’t replace it with CentOS.

I should be able to replace it with a new Fedora. So I hunted for a minimal install image… nothing! Netinstall? Of course not!

Fedora is a desktop OS, the idea that you don’t want a GUI seems alien to them 🙁

Oh well, nothing to do about it. Let’s give them what they want… then clean house. Hey it’s MY os!

Filesystem                     Size  Used Avail Use% Mounted on
rootfs                          18G  1.9G   16G  11% /

Here is what I did to get a minimal install with only a CLI and nothing fancy. Easy! Or maybe not 😕

  1. Boot the Xfce Live CD
  2. Install as normal
  3. Once logged on, open a terminal and do the following:
    1. yum grouplist | head -n20
    2. yum groupremove installedgroup (Remove all except the following: Text-based Internet and System Tools)
  4. Cleanup some more stuff
    1. yum remove -y NetworkManager-glib NetworkManager-gtk adwaita-cursor-theme adwaita-gtk2-theme adwaita-gtk3-theme avahi avahi-autoipd avahi-glib avahi-libs bluez-libs cheese-libs gnome-bluetooth-libs gnome-desktop3 gnome-menus gnome-themes-standard libX11 libX11-common libburn liberation-fonts-common libgphoto2 lm_sensors lm_sensors-libs mesa-dri-filesystem mesa-libGL mesa-libGLU mesa-libglapi mesa-libxatracker metacity xorg-x11-server-Xephyr xkeyboard-config xorg-x11-server-common xulrunner
  5. Install some essentials
    1. dhclient eth0 (didn’t want to do the static configuration yet)
    2. yum install -y yum-utils parted htop tmux screen
  6. Enable sshd
    1. systemctl start sshd.service
    2. systemctl enable sshd.service
    3. nano /etc/sysconfig/iptables /etc/sysconfig/ip6tables
      1. add: -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT (see the note after this list to apply the rule without rebooting)
  7. Update the system
    1. yum update -y
  8. Reboot
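
If you want the SSH rule active before the final reboot, reloading the firewall should do it; this assumes the stock iptables/ip6tables service units are in use:

systemctl restart iptables.service ip6tables.service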

You now have a very small Fedora install without X Window System or a desktop environment. Just the way I like it.

Quick reference: http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet — I use systemd far too little.

June 30th, 2012 | Categories: Linux, Personal, Technology

As you have guessed, I’m a Linux geek. I used to be of the bearded kind, even.

So here is a bit of my history with Linux for those who are interested.

It all started back in the day with my old Compaq laptop, somewhere around 1999; ’t was the eve before Win2k. A Mandrake disk came with a computer magazine, and since my laptop was broken I thought I’d give it a try. Most things worked, aside from X11; being a laptop, it had bad support back in the day.

I’ve gradually used Linux more and more, to the point that I now use it nearly exclusively for all my servers. I use it on most other hardware too, my MacBook Pro excluded.

I’ve seen the 2.2, 2.4, 2.6 and 3.0 kernel releases, and it was a delight to see the hardware support grow with each one of them! I’ve also switched distributions a lot over the years; here is a list:

  1. Mandrake (My first steps with linux, console only)
  2. Debian (Yay… some form of X11! Oh what is this KDE you speak of… I don’t like it though)
  3. Gentoo (Freedom! control, bleeding edge and mostly the cursing at broken systems)
  4. Ubuntu (My first netbook. Great selection of packages, but too little control.)
  5. RHEL (54+) / CentOS 5+ / Fedora (I’ll lump them together, mostly for work.)
  6. Arch (Simplicity, control, still fairly bleeding edge, just less bleeding and cursing than Gentoo)

Believe it or not, blackdot.be ran on Gentoo until the last migration. Now it runs on CentOS.
I’ve come to value my time more lately. Stability is good.

Zimbra works fine on CentOS 6. I only had to compile the httpd 2.4 branch. Oh boy, the joys of rpmbuild! Expect an article on this in the future.

Why this nostalgia all of a sudden? Well, it’s good to know one’s roots. Most new Linux users have always known GUIs, but I’m a CLI man. This will explain my upcoming post!

Arch, you have a special place in my life. You are perfect! But for now, CentOS will run on my servers.

Hopefully you have enjoyed this bit of trivia and nostalgia!

Comments Off on Linux and I
June 28th, 2012 | Categories: Networking, Personal, Technology

Due to popular demand (I got 1 e-mail, but hey, they’ve been gone for a few days), I’ve added some of the old Apache HTTPd 64-bit binaries to the download page under archive.
Please use them with care. It is possible I’ll redo the compile tutorial at some point but I currently don’t have access to a compiler.

Enjoy!

Comments Off on Apache HTTPd 64-bit binaries