
Saturday, 5 December 2015

megacli key commands in linux

List physical devices
megacli -PDlist -a1 |grep "Enclosure Device ID\|Slot Number\|Firmware state"
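
To get the enclosure:slot pairs into the form the bracket syntax below expects, something like this works (a rough one-liner, assuming the usual PDList output layout; -aAll covers all adapters):
megacli -PDList -aAll | awk -F': *' '/Enclosure Device ID/{e=$2} /Slot Number/{s=$2} /Firmware state/{print e":"s"  "$2}'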

Assuming 6 spare PDs on Enclosure ID 32 with Slot nos 6-11:

Create RAID 6 array
megacli -CfgLdAdd -r6 [32:6,32:7,32:8,32:9,32:10,32:11] -a0

Create RAID 10 array
megacli -CfgSpanAdd -r10 -Array0[32:6,32:7] -Array1[32:8,32:9] -Array2[32:10,32:11] -a0
 

Check array status 
megacli -LDInfo -LAll -a0 

Delete array no. 2 (-Force may be required)
megacli -CfgLdDel -L2 -Force -a0

Extending an existing RAID array with a new disk
megacli -LDRecon -Start -r5 -Add -PhysDrv[32:3] -L0 -a0
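
The reconstruction runs in the background; I believe the matching progress check is
megacli -LDRecon -ShowProg -L0 -a0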

Creating a global hot spare
megacli -PDHSP -Set -PhysDrv [0:2] -a0
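
And, should you need to undo it, I think the matching removal is (double-check the syntax)
megacli -PDHSP -Rmv -PhysDrv [0:2] -a0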

Show background initialization progress
MegaCli64 -LDBI -ShowProg -Lall -a0

Monday, 2 November 2015

Avoiding 'The resulting partition is not properly aligned for best performance' in parted

There are various posts giving mathematical ways to work out aligned start sectors, but they don't always work, eg if optimal_io_size is zero.
With newer versions of parted, just use -a (alignment type):
parted -a optimal <device>
Then in parted use % to allocate, eg
(parted) mkpart <name> 0% 50%
Bingo - it now does the alignment for you. You can check what it's done by setting units to sectors, eg
(parted) unit s
(parted) p
Model: ATA WDC WD4001FAEX-0 (scsi)
Disk /dev/sdb: 7814037168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
 1      2048s  7814035455s  7814033408s              P1
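
Newer versions of parted can also confirm the alignment directly, and the whole thing can be scripted - roughly like this (untested sketch; adjust device and partition name):
parted -s -a optimal /dev/sdb mklabel gpt mkpart P1 0% 50%
parted /dev/sdb align-check optimal 1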

Thursday, 22 October 2015

Exporting nfs version 3 through a firewall

nfs v4 uses only 1 port - 2049. Open that to tcp & udp in your firewall and you're good.
nfs v3 and before are a bit more complicated. As well as the main nfs port 2049, you need to allow access to the portmapper (fixed on port 111) and the mountd, lockd & statd ports - and possibly also the rquotad port - and apart from the portmapper these aren't fixed by default. To fix them, so you can create appropriate firewall rules, edit /etc/sysconfig/nfs (in RHEL/CentOS/Scientific Linux) and add/modify these lines. The actual ports you choose are pretty arbitrary:
RQUOTAD_PORT=762
LOCKD_TCPPORT=890
LOCKD_UDPPORT=890
MOUNTD_PORT=892
STATD_PORT=891

Restart nfs & nfslock, set up your firewall rules (for both tcp & udp, as either can be used for any of these services) and that should be that - except that lockd is implemented as a kernel thread. The LOCKD variables above cause writes to /proc/sys/fs/nfs/nlm_tcpport & /proc/sys/fs/nfs/nlm_udpport in /etc/init.d/nfs and /etc/init.d/nfslock, but these don't seem to have any effect.
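
You can see whether the lockd settings have actually taken by checking /proc and the portmapper (nlockmgr is lockd's RPC name), eg
cat /proc/sys/fs/nfs/nlm_tcpport /proc/sys/fs/nfs/nlm_udpport
rpcinfo -p | grep nlockmgr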

Alternatively, the debian handbook https://debian-handbook.info/browse/stable/sect.nfs-file-server.html suggests creating a modprobe file:
Example 11.24: The /etc/modprobe.d/lockd file
options lockd nlm_udpport=2045 nlm_tcpport=2045

Either way, it seems you basically need to reboot to get it to work.
You can check your NFS ports with rpcinfo -p, which will also tell you which NFS versions and which IP protocols they are using. You can also run rpcinfo against a remote server to check it's all working: rpcinfo -p <servername>.
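
For reference, the firewall side on an iptables-based system might look something like this sketch - the ports are the ones fixed above plus the portmapper and nfs; add your usual source/interface restrictions:
for p in 111 2049 762 890 891 892; do
    iptables -A INPUT -p tcp --dport $p -j ACCEPT
    iptables -A INPUT -p udp --dport $p -j ACCEPT
done
service iptables save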

Monday, 25 July 2011

force a linux reboot- when shutdown and reboot -f won't work

Just found this excellent post about how to restart a linux system that is ignoring both shutdown and reboot -f commands (in my case due to problems with an adaptec raid controller): http://linax.wordpress.com/2009/02/16/linux-force-reboot-and-shutdown/
In case it disappears, this is the content:

Linux Force Reboot and shutdown

Filed under: Linux — Nasser Heidari @ 09:42
Force Reboot :
#echo 1 > /proc/sys/kernel/sysrq
#echo b > /proc/sysrq-trigger
If you want to force shutdown machine try this.
#echo 1 > /proc/sys/kernel/sysrq
#echo o > /proc/sysrq-trigger
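
If the box is still partly alive, it may be worth trying to sync and remount read-only before pulling the trigger - same sysrq mechanism, just different letters:
echo 1 > /proc/sys/kernel/sysrq
echo s > /proc/sysrq-trigger   # sync dirty buffers to disk
echo u > /proc/sysrq-trigger   # remount all filesystems read-only
echo b > /proc/sysrq-trigger   # then reboot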

Friday, 15 October 2010

PGI makelocalrc fails - but doesn't really

If you've just installed the Portland (PGI) compilers, you may get a message when you try to compile telling you to run 'makelocalrc' - but then when you try (once you've worked out the syntax), you may get a message like this:

cp: cannot stat `/opt/pgi/linux86/10.6/lib/libpgbind_real.a': No such file or directory
cp: cannot stat `/opt/pgi/linux86/10.6/lib/libpgbind_real.so': No such file or directory
localrc has not changed

I spent a while searching about this before finding a site (in Japanese - thank you babelfish) which said: don't worry, it really has been written. And it had. WHY LIE PGI!

Friday, 27 August 2010

Firefox - Could not find compatible GRE between version 1.9.2.7 and 1.9.2.7

Trying to start firefox on a new system (Rocks 5.3 running Scientific Linux SL 5.5 <=> Red Hat RHEL 5.5), I got the not terribly helpful message "Could not find compatible GRE between version 1.9.2.7 and 1.9.2.7". I found the answer by a roundabout route, but in hindsight I should have attacked this with
rpm -qa | grep "1.9.2.7"
which reveals that the only package at this revision level is xulrunner. Doing yum list all xulrunner showed that only the i386 version of xulrunner was installed. This is a 64-bit system, with both i386 & x86_64 firefox versions. Lo & behold, yum install xulrunner installed the 64-bit version as well and the problem went away. I guess yum erase firefox.x86_64 would have worked too.
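
Condensed into commands, the diagnosis and fix were roughly:
rpm -qa | grep "1.9.2.7"     # only xulrunner is at the GRE version from the error
yum list all xulrunner       # shows only the i386 build installed on this x86_64 box
yum install xulrunner        # pulls in the x86_64 build as well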
No prizes for the error message though...

Thursday, 19 August 2010

Kickstart can't repartition disks with software raid (md0 etc) arrays

If you're using kickstart to repartition disks with software raid arrays on them, you may get errors of the type 'no boot partition defined - may be due to lack of space'. I found this on a Rocks v5.3 cluster running Scientific Linux 5.5 = Red Hat Enterprise Linux (RHEL) 5.5. Although kickstart has a 'clearpart' option that is supposed to clear existing partitions, it doesn't seem to do the job properly.

Solution - brutal but effective, so make sure you don't want to keep anything on the disks: in the kickstart pre-install script (in Rocks, put it in /export/rocks/install/site-profiles/5.3/nodes/replace-partition.xml in the <pre> section) do

dd if=/dev/zero of=/dev/sda bs=1k count=64
dd if=/dev/zero of=/dev/sdb bs=1k count=64

as required to zero the partition tables of all the disks. It's not really necessary to zero anywhere near this much of the disk, but it takes no time and I like to be thorough!

I saw mention of zeroing the raid superblocks with mdadm --zero-superblock <partition>, but that didn't seem to do it for me.
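(Possibly that was because the arrays were still active; stopping them first, something like the sketch below, might make the gentler approach work - though the dd above certainly does the job.)
mdadm --stop /dev/md0                        # stop the old array first (example device)
mdadm --zero-superblock /dev/sda1 /dev/sdb1  # zero the superblocks on its member partitions (adjust to yours)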