What is the WWN of my Storage Adapter

Posted on Tue, Mar 13, 2012 at 11:37 by Hubertus A. Haniel

I am frequently asked how to find the WWN of a storage adapter on a modern Linux system with a 2.6 kernel.

This information used to live in /proc but has been moved into /sys and can be found simply by looking at /sys/class/fc_host/host*/port_name (host* because you may have multiple adapters, e.g. host0 and host1).

On my system I get the following:

cat /sys/class/fc_host/host*/port_name
0x5000101000000414
0x5000101000000416

Your storage administrator will need these numbers to present storage to you from an enterprise-class storage array.
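
If you have several adapters and want to see which WWN belongs to which host entry, a small shell loop prints the mapping (a minimal sketch using only the sysfs path shown above):

for h in /sys/class/fc_host/host*; do
   echo "$(basename "$h"): $(cat "$h/port_name")"
done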

Linux SCSI rescan? - Reboot!?

Posted on Tue, Dec 06, 2011 at 11:56 by Hubertus A. Haniel

Recently people keep asking me how to add and remove storage on a Linux system running a 2.6 kernel and get Linux to rescan the SCSI bus so that storage can be added (or removed) dynamically without rebooting.

So here is how to do it:

1 - Find the host number for the HBA:

ls /sys/class/fc_host/

You will have something like host1 or host2

2 - Ask the HBA to issue a LIP signal to rescan the FC bus:

echo 1 > /sys/class/fc_host/host1/issue_lip

3 - Wait for a few seconds for the LIP command to complete

4 - Ask the Linux kernel to rescan the SCSI devices on that HBA:

echo "- - -" > /sys/class/scsi_host/host1/scan

( - - - means every channel, every target and every LUN )
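
If you have more than one HBA you can wrap the three steps in a small loop (a minimal sketch; it assumes each entry under /sys/class/fc_host has a matching entry under /sys/class/scsi_host, which is the normal case):

for fc in /sys/class/fc_host/host*; do
   host=$(basename "$fc")
   echo 1 > "$fc/issue_lip"                          # ask the HBA to rescan the FC bus
   sleep 5                                           # give the LIP a few seconds to complete
   echo "- - -" > "/sys/class/scsi_host/$host/scan"  # every channel, target and LUN
done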

Edited on: Tue, Dec 06, 2011 12:07

LXC - Linux Containers on OpenSuSE/SLES

Posted on Wed, Oct 05, 2011 at 9:27 by Hubertus A. Haniel

Recently somebody pointed me at LXC so I thought I'd give it a try.

As I mainly work on SuSE/SLES, I attempted this following the documentation at http://en.opensuse.org/LXC, with a little extra help from http://lxc.teegra.net, using OpenSuSE 11.4.

At the time of writing, the OpenSuSE guide worked pretty well, but I had to make a few adjustments to get the container running properly:

  • To get the network to start up properly I had to comment out the paragraph that sets the mode to "onboot" in /etc/init.d/network, so during boot it is just called with start. This is a bit of a hack and may break things if the networking setup is more complex than a single interface. I also adjusted my config slightly to use DHCP rather than static addresses, as that is a little easier to handle in my test environment.
  • This is not mentioned in the documentation I found, but autofs has problems within a container. My home directory gets mounted from an NFS server, and autofs just seemed to hang, while a hard NFS mount in fstab worked just fine (see the example fstab entry after this list).
  • Booting the environment came up with lots of errors about udev, so I re-enabled it even though the documentation says it should be taken out.
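
Since the hard NFS mount in fstab was my workaround for the autofs problem, here is a minimal example of such an entry (the server name and paths are hypothetical, adjust them to your environment):

nfsserver:/export/home/user   /home/user   nfs   rw,hard   0 0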

I have the advantage of having Puppet in my environment to adjust the configuration of systems to suit my test setup, but a few things that would make LXC on SuSE viable in a proper environment are:

  • LXC/YaST integration, so that AutoYaST templates could be fed into container creation.
  • Currently there are no LXC templates for OpenSuSE, or only the bare frameworks, so one would have to create proper templates in order to use the lxc-create/lxc-destroy commands to create and destroy containers on the fly.
  • LXC is part of the SLES11 distribution, but there does not seem to be any documentation on what Novell would support inside a container in a production environment, especially since I had to hack startup scripts in /etc/init.d. The startup scripts would need to be properly adjusted so they do the right things when running inside a container; hacking them by hand is not really an option, as those changes may get reverted during patching.

Other than the above concerns and gotchas, LXC is a very interesting project: it has the potential of Solaris Zones and gives Linux a full complement of virtualisation technologies alongside UML (not really used any more), Xen and KVM.

Creating upgradable rpm packages (-Uvh)

Posted on Sat, Sep 24, 2011 at 11:23 by Hubertus A. Haniel

When creating rpm packages one should bear in mind that the pre- and post-install and uninstall scripts are also run during a package upgrade, and it can happen that actions are executed that you may not want during an upgrade, e.g. the removal of files in a %postun that will actually still be needed after the upgrade. I believe the %postun is actually run after the package has been upgraded. To get around this one should use the following framework. The trick is that rpm passes a count as the first argument ($1) to each scriptlet: for %pre and %post it is the number of versions of the package that will be installed afterwards, and for %preun and %postun it is the number of versions that will remain.

%pre
if [ "$1" = "1" ]; then
   # Do stuff that should happen during initial installation.
   :
elif [ "$1" = "2" ]; then
   # Do stuff that should happen during upgrade.
   :
fi

%post
if [ "$1" = "1" ]; then
   # Do stuff that should happen during initial installation.
   :
elif [ "$1" = "2" ]; then
   # Do stuff that should happen during upgrade.
   :
fi

%preun
if [ "$1" = "0" ]; then
   # Do stuff that should happen during removal of the package.
   :
elif [ "$1" = "1" ]; then
   # Do stuff that should happen during upgrade.
   :
fi

%postun
if [ "$1" = "0" ]; then
   # Do stuff that should happen during removal of the package.
   :
elif [ "$1" = "1" ]; then
   # Do stuff that should happen during upgrade.
   :
fi
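
As a concrete sketch (the path /etc/myapp/current is purely illustrative, not part of any real package), a %postun written this way only cleans up when the package is really being erased and leaves the file alone during an upgrade:

%postun
if [ "$1" = "0" ]; then
   # Real removal: delete the symlink the package created in %post.
   rm -f /etc/myapp/current
fi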



Edited on: Tue, Mar 13, 2012 11:34

Verification-time script in rpm

Posted on Fri, Sep 23, 2011 at 18:14 by Hubertus A. Haniel

This could be useful - I found this in the rpm documentation while I was searching for something else, so I thought I'd put it here for later :)

The %verifyscript executes whenever the installed package is verified by RPM's verification command. The contents of this script are entirely up to the package builder, but in general the script should do whatever is necessary to verify the package's proper installation. Since RPM automatically verifies the existence of a package's files, along with other file attributes, the %verifyscript should concentrate on different aspects of the package's installation. For example, the script may ensure that certain configuration files contain the proper information for the package being verified:

for n in ash bsh; do
   echo -n "Looking for $n in /etc/shells... "
   if ! grep "^/bin/${n}\$" /etc/shells > /dev/null; then 
      echo "missing"
      echo "${n} missing from /etc/shells" >&2
   else
      echo "found"
   fi
done

In this script, the config file /etc/shells is checked to ensure that it has entries for the shells provided by this package.

It is worth noting that the script sends informational and error messages to stdout, but only the error messages to stderr. Normally RPM will only display error output from a verification script; the output sent to stdout is only shown when the verification is run in verbose mode.
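
To trigger the %verifyscript you simply verify the package; adding -v makes rpm verbose so the informational stdout output is shown as well (the package name below is just a placeholder):

rpm -V mypackage      # errors only
rpm -Vv mypackage     # verbose: also shows the script's stdout messages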

Edited on: Sat, Sep 24, 2011 18:39
