[ale] Suse 9.3 and fiber storage

Damon L. Chesser damon at damtek.com
Sat Apr 16 12:57:31 EDT 2011


On Sat, 2011-04-16 at 11:25 -0400, Lightner, Jeff wrote:
> There are levels of information you can get.
> 
> As the prior poster says, you should first look at what you have
> mounted by running df -h.  (This assumes you're not using raw disks
> for databases.)
> 
> Are the mounted things showing up as sd devices (disks or
> partitions)?  Logical volumes?  Software RAID (/dev/md)?  Once you
> know that, you can look at the underlying devices.  If it is
> /dev/sd*, run fdisk -l on the /dev/sd device (e.g. if you see
> /dev/sdc1, run fdisk -l on /dev/sdc).  If it is a volume group, look
> at your vgdisplay -v output to see what disks it contains.  If it is
> software RAID, look at that setup.  (Even though you have hardware
> RAID in the Hitachi itself, it may have been set up as software RAID
> in Linux too for some reason - not likely, but not outside the realm
> of possibility.)
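> 
> A rough first pass, all read-only (the device name is just an
> example - use whatever df actually shows you):
> 
>     df -h                    # what's mounted, where, and how big
>     cat /proc/mdstat         # is any software RAID (/dev/md*) assembled?
>     fdisk -l /dev/sdc        # partition table of one whole SCSI disk
>     vgdisplay -v             # volume groups and the physical disks behind them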
> 
> You can get information on your QLogic setup using the QLogic scli command.
> 
> You can also get some information out of /proc/scsi/qla* (you should see an instance number for each card under there).
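> 
> For example (the driver directory name and the instance numbers vary
> with the driver version, so treat this as a sketch):
> 
>     ls /proc/scsi/              # look for qla2xxx, qla2300, or similar
>     cat /proc/scsi/qla2xxx/1    # dumps adapter, firmware, and attached-device state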
> 
> You mentioned Hitachi disks.  If you have the /HORCM software
> installed, you can get information using the inqraid command in the
> usr/bin directory under /HORCM.  (This will let you equate the
> array's LDEV IDs with the Linux system disk IDs.)  We use that fairly
> extensively here.
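> 
> A typical invocation, assuming the stock RAID Manager layout (verify
> the path and flags against your install):
> 
>     ls /dev/sd* | /HORCM/usr/bin/inqraid -CLI    # maps each /dev/sd* to its array LDEV ID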
> 
> We don't use the Hitachi HDLM software, but you may have it, and if so it may give you some information.

All good info, and it gives me a starting point to poke around on the
target systems.  Thanks!
>  
> 
> -----Original Message-----
> From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Greg Freemyer
> Sent: Friday, April 15, 2011 7:51 PM
> To: damon at damtek.com; Atlanta Linux Enthusiasts
> Subject: Re: [ale] Suse 9.3 and fiber storage
> 
> For figuring out what you have:
> 
> You're getting too complex for what seems like a simple job.
> 
> FC volumes normally show up as SCSI devices, so /dev/sdb, etc. are likely the drives.
> 
> Just like a physical drive, an FC volume can be used whole or partitioned.
> 
> To get the full unpartitioned volume size, look in /sys/block/sdb/...
>  (You can also just call df.)
> 
> You should be able to get partition info from /proc/partitions
> 
> You should see all of your mount points the traditional way, i.e. look
> in /etc/fstab and/or run mount.
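> 
> Pulled together, something like this read-only pass (sdb is just an
> example name):
> 
>     cat /proc/partitions       # every block device and partition the kernel sees
>     cat /sys/block/sdb/size    # whole-volume size in 512-byte sectors
>     cat /etc/fstab             # what is supposed to be mounted at boot
>     mount                      # what is actually mounted right now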
> 
> The key thing is FC drives fit into the normal scheme at the level you
> are talking about.
> 
> You will have a little more fun setting up the new environment and
> mounting the volumes.
> 
> Also, you can't tell how the RAID setup is done from the basic Linux
> side.  (There may be management software that tells you, but that will
> be a separate thing.  Likely the storage guys have that info and you
> don't.  Trouble is they need to know more detail than just /dev/sdb
> nomenclature to know which volume you are talking about on their end.)
> 
> Greg
> 
> 
> 
> 
> On Fri, Apr 15, 2011 at 4:51 PM, Damon L. Chesser <damon at damtek.com> wrote:
> > I have a bunch of Suse 9.3 servers with various apps that need to be
> > migrated to RHEL 5 or 6.
> >
> > I have back-end SANs attached via QLogic HBAs.
> >
> > How do I verify how the attached storage is mounted (i.e., that a
> > given mount is remote, via the HBA)?
> >
> > There is no /etc/multipath.conf.
> >
> > What I am looking for is a way I can get info and make a "map" that I
> > can duplicate on the new OS.  The new storage will be entirely new
> > partitions on completely different LUNs, but the "structure" might
> > need to be the same, e.g. /somemount is 17G, /somemount2 is 15G, etc.
> >
> > /dev/disk/by-* has by-id and by-uuid and by-path.
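> >
> > For example, listing those symlinks shows how the persistent names
> > line up with the /dev/sd* devices (a sketch - exact entries vary by
> > system):
> >
> >     ls -l /dev/disk/by-id/      # stable WWN/serial-based names -> /dev/sd*
> >     ls -l /dev/disk/by-path/    # names tied to the HBA/port/LUN path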
> >
> > I know this is both rather simple and broad, but I have zero fiber/HBA
> > experience and it would appear I don't know the proper search terms to
> > google.
> >
> > If it matters, the (old) back end is a Hitachi, and I don't know the
> > front end.  I will not be tasked with slicing up the LUNs, just with
> > reporting what sizes I need them to be, then mounting the partitions
> > at the proper mount points on the new OS.
> > --
> > Damon
> > damon at damtek.com
> >
> 
> 
> 
> -- 
> Greg Freemyer
> Head of EDD Tape Extraction and Processing team
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> CNN/TruTV Aired Forensic Imaging Demo -
>    http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/
> 
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
> 

-- 
Damon
damon at damtek.com


