Welcome

All the steps described here were tested in production environments.

Monday, March 28, 2011

Moving zones between servers


In the following scenario:
Server A: a domain with 5 zones; I had to move one of the zones to a new server (server B) with different storage.
Server B: runs a newer Solaris release, so I have to update the zone.

On server B:
mkdir -p /export/zona3
mount /dev/md/dsk/d103 /export/zona3
chmod 700 /export/zona3
Create the zone:
zonecfg -z zona3
create -b
set zonepath=/export/zona3
set autoboot=true
add net
set address=10.78.1xx.144
set physical=ce2
end
commit
exit
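Before copying anything over, it is worth a quick check that the configuration was saved (standard commands, nothing specific to this example):

zonecfg -z zona3 info     # dumps the stored configuration
zoneadm list -cv          # zona3 should now appear in the 'configured' state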
Then, from server A, I copy the zone's operating system over to the global zone of server B:
ufsdump 0f - /export/zona3/ |ssh 10.78.1xx.143 "(cd /export/zona3; ufsrestore -rf -)"

Back on server B:

zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- zona3 installed /export/zona3

I update zona3, because the global zone of the original server runs an older release than the new server's global zone:
zoneadm -z zona3 attach -u
Then, in the global zone of server B, mount the zone's data filesystems on any mount point, so the copy can be made.

Then, standing on server A:
Stop all processes in zona3 (do not shut the zone down).
Copy everything to the global zone of server B (these repetitive copies can also be looped; see the sketch after the commands):
ufsdump 0f - /export/zona3/ |ssh 10.78.1xx.143 "(cd /export/zona3; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u00 |ssh 10.78.1xx.143 "(cd /ZONA3_u00; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u01 |ssh 10.78.1xx.143 "(cd /ZONA3_u01; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u02 |ssh 10.78.1xx.143 "(cd /ZONA3_u02; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u03 |ssh 10.78.1xx.143 "(cd /ZONA3_u03; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u04 |ssh 10.78.1xx.143 "(cd /ZONA3_u04; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u05 |ssh 10.78.1xx.143 "(cd /ZONA3_u05; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u06 |ssh 10.78.1xx.143 "(cd /ZONA3_u06; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u07 |ssh 10.78.1xx.143 "(cd /ZONA3_u07; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u08 |ssh 10.78.1xx.143 "(cd /ZONA3_u08; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u09 |ssh 10.78.1xx.143 "(cd /ZONA3_u09; ufsrestore -rf -)"
ufsdump 0f - /export/zona3/root/u10 |ssh 10.78.1xx.143 "(cd /ZONA3_u10; ufsrestore -rf -)"
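Since the u00..u10 copies all follow the same pattern, they could also be run as a small loop; a sketch, assuming the same source paths and /ZONA3_* mount points as above:

for fs in u00 u01 u02 u03 u04 u05 u06 u07 u08 u09 u10
do
  ufsdump 0f - /export/zona3/root/$fs | \
    ssh 10.78.1xx.143 "(cd /ZONA3_$fs; ufsrestore -rf -)"
done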


Once the copy finishes, on server A:
zlogin zona3 and SHUT DOWN zona3.
Then comment out the zona3 line in /etc/zones/index (to keep the zone from coming up if the server ever boots).
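A sketch of that edit (back up the index first; check the exact line format on your release before scripting it):

cp /etc/zones/index /etc/zones/index.bak
sed '/^zona3:/s/^/#/' /etc/zones/index.bak > /etc/zones/index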

On server B:
Unmount all the /ZONA3* mount points, because these filesystems must be mounted inside the zone.
Then:
zoneadm -z zona3 boot
zlogin zona3
and check that everything came up.
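A quick way to check from the global zone is to ask SMF for anything that failed to come online (standard Solaris 10 commands, nothing specific to this setup):

zlogin zona3 svcs -xv                    # explains services that are not running
zlogin zona3 svcs -a | grep -v online    # anything still offline or in maintenance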

Problems I ran into.
When trying to boot it, the zone would hang on the sysidnet services and go no further.
Solution:
touch /etc/.UNCONFIGURED
Set the "System previously configured?" option in /etc/.sysIDtool.state to 0 (zero), as sketched after the list below,
and run by hand everything sysidtool normally does:
/usr/sbin/sysidnet
/usr/sbin/sysidns
/usr/sbin/sysidsys
/usr/sbin/sysidroot
/usr/sbin/sysidpm
/usr/sbin/sysidnfs4
/usr/sbin/sysidkbd
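For reference, a sketch of the state-file edit mentioned above; the layout of /etc/.sysIDtool.state varies between releases, so look at the file before scripting this:

cp /etc/.sysIDtool.state /etc/.sysIDtool.state.bak
sed 's/^1\(.*System previously configured\)/0\1/' /etc/.sysIDtool.state.bak > /etc/.sysIDtool.state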

After that, the zone continued booting normally.

Another small problem:
Originally there were 9 Oracle instances running, but in this new zone only 7 of them would start; the rest aborted for lack of memory.
So from the global zone we ran:
projmod -s -K "project.max-shm-memory=(priv,8192MB,deny)" oracle_SIEBEL
and all the databases came up as expected.
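The new limit can be verified right away with prctl, and compared against the persistent entry projmod wrote to /etc/project:

prctl -n project.max-shm-memory -i project oracle_SIEBEL
grep oracle_SIEBEL /etc/project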
Up to this point everything worked perfectly, but... when we tried to install Oracle 11g, the installer checks which Solaris release is installed, and it does NOT read /etc/release; it runs this command instead:
/usr/bin/pkginfo -l SUNWsolnm | /usr/bin/nawk -F= '/VERSION/ {"/usr/bin/uname -r" | getline uname; print uname "-" $2}'
In other words, it checks the SUNWsolnm package, the one that carries the release name, and zoneadm attach -u does NOT upgrade that package.
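To see ahead of time what the installer will conclude, the same check can be run by hand and compared against /etc/release:

/usr/bin/pkginfo -l SUNWsolnm | grep VERSION
cat /etc/release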
Solution:
If the global zone runs one of the latest Solaris 10 releases, run the attach with the -U option (uppercase). If you use this option, keep in mind that it modifies some files, such as /etc/hosts and sudoers, replacing the updated zone's copies with the global zone's.
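In that case the attach from this example simply becomes:

zoneadm -z zona3 attach -U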
If the release is old, follow these other steps.
Back up the zone.
Inside the zone, run an init 0.
zoneadm -z zona3 uninstall -F (this deletes the ENTIRE zone)
Recreate the zone from scratch, as described at the beginning, and then:
zoneadm -z zona3 install
Then do a selective restore of some configuration files: passwd, shadow, group, all the users' home directories, resolv.conf, hosts, and the startup scripts in /etc/init.d (a sketch of this restore follows below).
zoneadm -z zona3 boot
Done.
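A sketch of that selective restore, assuming the backup of the zone root is available as a ufsdump file (the file name is made up for the example, and adjust export/home to wherever the users' homes live); ufsrestore's interactive mode lets you pick exactly those paths:

ufsrestore -if /backup/zona3_root.dump
ufsrestore > add etc/passwd
ufsrestore > add etc/shadow
ufsrestore > add etc/group
ufsrestore > add etc/resolv.conf
ufsrestore > add etc/hosts
ufsrestore > add etc/init.d
ufsrestore > add export/home
ufsrestore > extract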

Wednesday, March 23, 2011

IBM disk load balancer


Below is an example installation of IBM's disk load balancer, IBMsdd.
It is installed with pkgadd -d:
# pkgadd -d .

The following packages are available:
1 IBMsdd IBMsdd Driver 64-bit Solaris 10 Version: 1.6.5.0-0 Oct-01-2008 16:10
(sparc) 1.6.5.0-0

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

Processing package instance <IBMsdd> from

IBMsdd Driver 64-bit Solaris 10 Version: 1.6.5.0-0 Oct-01-2008 16:10
(sparc) 1.6.5.0-0
Copyright IBM Corporation 2001
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <IBMsdd> [y,n,?] y

Installing IBMsdd Driver 64-bit Solaris 10 Version: 1.6.5.0-0 Oct-01-2008 16:10 as <IBMsdd>

## Executing preinstall script.
/var/sadm/pkg/IBMsdd/install/preinstall: pre install running
## Installing part 1 of 1.
/etc/cfgvpath
/etc/defvpath
/etc/sample_sddsrv.conf
/kernel/drv/sparcv9/vpathdd
/kernel/drv/vpathdd.conf
/lib/svc/method/ibmsddinit
/opt/IBMsdd/bin/cfgvpath
/opt/IBMsdd/bin/datapath
/opt/IBMsdd/bin/defvpath
/opt/IBMsdd/bin/get_root_disk
/opt/IBMsdd/bin/pathtest
/opt/IBMsdd/bin/rmvpath
/opt/IBMsdd/bin/sddgetdata
/opt/IBMsdd/bin/sddgetwwpn
/opt/IBMsdd/bin/sddprutil
/opt/IBMsdd/bin/setlicense
/opt/IBMsdd/bin/showvpath
/opt/IBMsdd/bin/vpathmkdev
/opt/IBMsdd/devlink.vpath.tab
/opt/IBMsdd/etc.profile
/opt/IBMsdd/etc.system
/opt/IBMsdd/vpath.msg
/opt/IBMsdd/vpathexcl.cfg
/sbin/sddsrv
/usr/lib/sddlib.so
/usr/sbin/vpathmkdev
/var/svc/manifest/system/ibmsdd/ibmsdd-init.xml
[ verifying class <none> ]
## Executing postinstall script.
Vpath: Configuring 0 devices (0 disks * 8 slices)

Installation of <IBMsdd> was successful.

cfgadm -f -c configure c1 and c2 (as always)
Then I run these commands to generate the load-balanced devices:
/opt/IBMsdd/bin/cfgvpath
/opt/IBMsdd/bin/vpathmkdev (this is the one that creates the devices you see in format)
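Once vpathmkdev has run, the vpath-to-LUN mapping can be reviewed with the tools the package installed above:

/opt/IBMsdd/bin/showvpath               # shows each vpath and its underlying paths
/opt/IBMsdd/bin/datapath query device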

To add a disk to a concatenated volume I did the following:
[asun005] /opt/IBMsdd/bin # metastat -p d100
d100 2 1 /dev/dsk/vpath1a \
1 /dev/dsk/vpath2a
[asun005] /opt/IBMsdd/bin # metattach d100 /dev/dsk/vpath3a
d100: component is attached
[asun005] /opt/IBMsdd/bin # metastat -p d100
d100 3 1 /dev/dsk/vpath1a \
1 /dev/dsk/vpath2a \
1 /dev/dsk/vpath3a
[asun005] /opt/IBMsdd/bin # df -h /BACKUPS
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d100 504G 482G 17G 97% /BACKUPS

[asun005] /opt/IBMsdd/bin # growfs -M /BACKUPS /dev/md/rdsk/d100
Warning: 2048 sector(s) in last cylinder unallocated
/dev/md/rdsk/d100: 1597898752 sectors in 260075 cylinders of 48 tracks, 128 sectors
780224.0MB in 16255 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
...............................................................................
........
super-block backups for last 10 cylinder groups at:
1596955296, 1597053728, 1597152160, 1597250592, 1597349024, 1597447456,
1597545888, 1597644320, 1597742752, 1597841184
[asun005] /opt/IBMsdd/bin # df -h /BACKUPS
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d100 750G 482G 263G 65% /BACKUPS
[asun005] /opt/IBMsdd/bin #


Notes:
/opt/IBMsdd/bin/datapath query device (this is where we get the serial number contained in the UID)
Next, a detailed step-by-step of assigning and creating new metadevices, plus a problem with a vpath and its solution.

I was assigned 6 disks of 209 GB, 1 disk of 32 GB, and 1 of 12 GB.
[coneja] /opt/IBMsdd/bin # format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t9d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@9,0
1. c0t10d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@a,0
2. c0t11d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@b,0
Specify disk (enter its number): ^C
[coneja] /opt/IBMsdd/bin # cfgadm -la
Ap_Id Type Receptacle Occupant Condition
IO4 unknown empty unconfigured unknown
IO5 unknown empty unconfigured unknown
IO6 unknown empty unconfigured unknown
IO7 unknown empty unconfigured unknown
IO8 unknown empty unconfigured unknown
IO10 HPCI+ disconnected unconfigured unknown
IO12 HPCI+ connected configured ok
IO12::pci0 io connected configured ok
IO12::pci1 io connected configured ok
IO12::pci2 io connected configured ok
IO12::pci3 io connected configured ok
IO12_C3V0 fibre/hp connected configured ok
IO12_C3V1 fibre/hp connected configured ok
IO12_C3V2 pci-pci/hp connected configured ok
IO13 unknown empty unconfigured unknown
IO16 unknown empty unconfigured unknown
IO17 unknown empty unconfigured unknown
SB4 unknown empty unconfigured unknown
SB5 unknown empty unconfigured unknown
SB6 unknown empty unconfigured unknown
SB7 unknown empty unconfigured unknown
SB8 unknown empty unconfigured unknown
SB12 V3CPU connected configured ok
SB12::cpu0 cpu connected configured ok
SB12::cpu1 cpu connected configured ok
SB12::cpu2 cpu connected configured ok
SB12::cpu3 cpu connected configured ok
SB12::memory memory connected configured ok
SB13 unknown empty unconfigured unknown
SB16 unknown empty unconfigured unknown
SB17 unknown empty unconfigured unknown
c0 scsi-bus connected configured unknown
c0::dsk/c0t9d0 disk connected configured unknown
c0::dsk/c0t10d0 disk connected configured unknown
c0::dsk/c0t11d0 disk connected configured unknown
c0::es/ses0 processor connected configured unknown
c1 fc-fabric connected configured unknown
c1::5005076801303680 disk connected configured unknown
c1::50050768013036a1 disk connected configured unknown
c1::5005076801403680 disk connected configured unknown
c1::50050768014036a1 disk connected configured unknown
c2 fc-fabric connected configured unknown
c2::5005076801103680 disk connected configured unknown
c2::50050768011036a1 disk connected configured unknown
c2::5005076801203680 disk connected configured unknown
c2::50050768012036a1 disk connected configured unknown
c3 scsi-bus connected configured unknown
c3::rmt/0 tape connected configured unknown
c4 fc connected unconfigured unknown
c5 fc connected unconfigured unknown
usb0/1 unknown empty unconfigured ok
usb0/2 unknown empty unconfigured ok
usb0/3 unknown empty unconfigured ok
usb0/4 unknown empty unconfigured ok
[coneja] /opt/IBMsdd/bin #
[coneja] /opt/IBMsdd/bin # cfgadm -f -c configure c1
[coneja] /opt/IBMsdd/bin # cfgadm -f -c configure c2
[coneja] /opt/IBMsdd/bin # format
Searching for disks...done

c1t50050768013036A1d0: configured with capacity of 12.00GB
c1t50050768014036A1d0: configured with capacity of 12.00GB
c1t50050768013036A1d1: configured with capacity of 31.98GB
c1t50050768014036A1d1: configured with capacity of 31.98GB
c1t50050768014036A1d2: configured with capacity of 209.98GB
c1t50050768013036A1d2: configured with capacity of 209.98GB
c1t50050768013036A1d3: configured with capacity of 209.98GB
c1t50050768014036A1d3: configured with capacity of 209.98GB
c1t50050768014036A1d4: configured with capacity of 209.98GB
c1t50050768013036A1d4: configured with capacity of 209.98GB
c1t50050768013036A1d5: configured with capacity of 209.98GB
c1t50050768014036A1d5: configured with capacity of 209.98GB
c1t50050768013036A1d6: configured with capacity of 209.98GB
c1t50050768014036A1d6: configured with capacity of 209.98GB
c1t50050768013036A1d7: configured with capacity of 209.98GB
c1t50050768014036A1d7: configured with capacity of 209.98GB
c1t5005076801303680d0: configured with capacity of 12.00GB
c1t5005076801403680d0: configured with capacity of 12.00GB
c1t5005076801403680d1: configured with capacity of 31.98GB
c1t5005076801303680d1: configured with capacity of 31.98GB
c1t5005076801303680d2: configured with capacity of 209.98GB
c1t5005076801403680d2: configured with capacity of 209.98GB
c1t5005076801303680d3: configured with capacity of 209.98GB
c1t5005076801403680d3: configured with capacity of 209.98GB
c1t5005076801403680d4: configured with capacity of 209.98GB
c1t5005076801303680d4: configured with capacity of 209.98GB
c1t5005076801303680d5: configured with capacity of 209.98GB
c1t5005076801403680d5: configured with capacity of 209.98GB
c1t5005076801403680d6: configured with capacity of 209.98GB
c1t5005076801303680d6: configured with capacity of 209.98GB
c1t5005076801403680d7: configured with capacity of 209.98GB
c1t5005076801303680d7: configured with capacity of 209.98GB
c2t50050768012036A1d0: configured with capacity of 12.00GB
c2t50050768011036A1d0: configured with capacity of 12.00GB
c2t50050768012036A1d1: configured with capacity of 31.98GB
c2t50050768011036A1d1: configured with capacity of 31.98GB
c2t50050768011036A1d2: configured with capacity of 209.98GB
c2t50050768012036A1d2: configured with capacity of 209.98GB
c2t50050768011036A1d3: configured with capacity of 209.98GB
c2t50050768012036A1d3: configured with capacity of 209.98GB
c2t50050768011036A1d4: configured with capacity of 209.98GB
c2t50050768012036A1d4: configured with capacity of 209.98GB
c2t50050768011036A1d5: configured with capacity of 209.98GB
c2t50050768012036A1d5: configured with capacity of 209.98GB
c2t50050768012036A1d6: configured with capacity of 209.98GB
c2t50050768011036A1d6: configured with capacity of 209.98GB
c2t50050768011036A1d7: configured with capacity of 209.98GB
c2t50050768012036A1d7: configured with capacity of 209.98GB
c2t5005076801103680d0: configured with capacity of 12.00GB
c2t5005076801203680d0: configured with capacity of 12.00GB
c2t5005076801103680d1: configured with capacity of 31.98GB
c2t5005076801203680d1: configured with capacity of 31.98GB
c2t5005076801203680d2: configured with capacity of 209.98GB
c2t5005076801103680d2: configured with capacity of 209.98GB
c2t5005076801103680d3: configured with capacity of 209.98GB
c2t5005076801203680d3: configured with capacity of 209.98GB
c2t5005076801203680d4: configured with capacity of 209.98GB
c2t5005076801103680d4: configured with capacity of 209.98GB
c2t5005076801203680d5: configured with capacity of 209.98GB
c2t5005076801103680d5: configured with capacity of 209.98GB
c2t5005076801103680d6: configured with capacity of 209.98GB
c2t5005076801203680d6: configured with capacity of 209.98GB
c2t5005076801203680d7: configured with capacity of 209.98GB
c2t5005076801103680d7: configured with capacity of 209.98GB


AVAILABLE DISK SELECTIONS:
0. c0t9d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@9,0
1. c0t10d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@a,0
2. c0t11d0
/pci@19c,700000/pci@1/pci@1/scsi@2/sd@b,0
3. c1t50050768013036A1d0
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,0
4. c1t50050768014036A1d0
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,0
5. c1t50050768013036A1d1
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,1
6. c1t50050768014036A1d1
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,1
7. c1t50050768014036A1d2
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,2
8. c1t50050768013036A1d2
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,2
9. c1t50050768013036A1d3
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,3
10. c1t50050768014036A1d3
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,3
11. c1t50050768014036A1d4
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,4
12. c1t50050768013036A1d4
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,4
13. c1t50050768013036A1d5
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,5
14. c1t50050768014036A1d5
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,5
15. c1t50050768013036A1d6
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,6
16. c1t50050768014036A1d6
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,6
17. c1t50050768013036A1d7
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768013036a1,7
18. c1t50050768014036A1d7
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768014036a1,7
19. c1t5005076801303680d0
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,0
20. c1t5005076801403680d0
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,0
21. c1t5005076801403680d1
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,1
22. c1t5005076801303680d1
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,1
23. c1t5005076801303680d2
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,2
24. c1t5005076801403680d2
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,2
25. c1t5005076801303680d3
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,3
26. c1t5005076801403680d3
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,3
27. c1t5005076801403680d4
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,4
28. c1t5005076801303680d4
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,4
29. c1t5005076801303680d5
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,5
30. c1t5005076801403680d5
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,5
31. c1t5005076801403680d6
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,6
32. c1t5005076801303680d6
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,6
33. c1t5005076801403680d7
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801403680,7
34. c1t5005076801303680d7
/pci@19c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801303680,7
35. c2t50050768012036A1d0
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,0
36. c2t50050768011036A1d0
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,0
37. c2t50050768012036A1d1
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,1
38. c2t50050768011036A1d1
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,1
39. c2t50050768011036A1d2
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,2
40. c2t50050768012036A1d2
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,2
41. c2t50050768011036A1d3
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,3
42. c2t50050768012036A1d3
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,3
43. c2t50050768011036A1d4
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,4
44. c2t50050768012036A1d4
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,4
45. c2t50050768011036A1d5
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,5
46. c2t50050768012036A1d5
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,5
47. c2t50050768012036A1d6
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,6
48. c2t50050768011036A1d6
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,6
49. c2t50050768011036A1d7
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768011036a1,7
50. c2t50050768012036A1d7
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50050768012036a1,7
51. c2t5005076801103680d0
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,0
52. c2t5005076801203680d0
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,0
53. c2t5005076801103680d1
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,1
54. c2t5005076801203680d1
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,1
55. c2t5005076801203680d2
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,2
56. c2t5005076801103680d2
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,2
57. c2t5005076801103680d3
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,3
58. c2t5005076801203680d3
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,3
59. c2t5005076801203680d4
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,4
60. c2t5005076801103680d4
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,4
61. c2t5005076801203680d5
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,5
62. c2t5005076801103680d5
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,5
63. c2t5005076801103680d6
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,6
64. c2t5005076801203680d6
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,6
65. c2t5005076801203680d7
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801203680,7
66. c2t5005076801103680d7
/pci@19d,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076801103680,7
Specify disk (enter its number):
HERE comes the interesting part: up to now the disks are not visible through vpath; only after the following commands do they start to show up.
[coneja] /opt/IBMsdd/bin # cfgvpath
Vpath: Configuring 64 devices (8 disks * 8 slices)
[coneja] /opt/IBMsdd/bin #
[coneja] /opt/IBMsdd/bin # vpathmkdev
[coneja] /opt/IBMsdd/bin #
Here is an excerpt of the format output, to keep this short.
[coneja] /opt/IBMsdd/bin # format
Searching for disks...done

c1t50050768014036A1d0: configured with capacity of 12.00GB
c1t50050768013036A1d0: configured with capacity of 12.00GB
c1t50050768014036A1d1: configured with capacity of 31.98GB
c1t50050768013036A1d1: configured with capacity of 31.98GB
c1t50050768014036A1d2: configured with capacity of 209.98GB
c1t50050768013036A1d2: configured with capacity of 209.98GB
c1t50050768013036A1d3: configured with capacity of 209.98GB
c1t50050768014036A1d3: configured with capacity of 209.98GB
c1t50050768014036A1d4: configured with capacity of 209.98GB
c1t50050768013036A1d4: configured with capacity of 209.98GB
c1t50050768013036A1d5: configured with capacity of 209.98GB
c1t50050768014036A1d5: configured with capacity of 209.98GB
c1t50050768014036A1d6: configured with capacity of 209.98GB
c1t50050768013036A1d6: configured with capacity of 209.98GB
c1t50050768014036A1d7: configured with capacity of 209.98GB
c1t50050768013036A1d7: configured with capacity of 209.98GB
c1t5005076801403680d0: configured with capacity of 12.00GB
c1t5005076801303680d0: configured with capacity of 12.00GB
c1t5005076801403680d1: configured with capacity of 31.98GB
c1t5005076801303680d1: configured with capacity of 31.98GB
c1t5005076801303680d2: configured with capacity of 209.98GB
c1t5005076801403680d2: configured with capacity of 209.98GB
c1t5005076801303680d3: configured with capacity of 209.98GB
c1t5005076801403680d3: configured with capacity of 209.98GB
c1t5005076801403680d4: configured with capacity of 209.98GB
c1t5005076801303680d4: configured with capacity of 209.98GB
c1t5005076801303680d5: configured with capacity of 209.98GB
c1t5005076801403680d5: configured with capacity of 209.98GB
c1t5005076801403680d6: configured with capacity of 209.98GB
c1t5005076801303680d6: configured with capacity of 209.98GB
c1t5005076801303680d7: configured with capacity of 209.98GB
c1t5005076801403680d7: configured with capacity of 209.98GB
c2t50050768012036A1d0: configured with capacity of 12.00GB
c2t50050768011036A1d0: configured with capacity of 12.00GB
c2t50050768011036A1d1: configured with capacity of 31.98GB
c2t50050768012036A1d1: configured with capacity of 31.98GB
c2t50050768012036A1d2: configured with capacity of 209.98GB
c2t50050768011036A1d2: configured with capacity of 209.98GB
c2t50050768011036A1d3: configured with capacity of 209.98GB
c2t50050768012036A1d3: configured with capacity of 209.98GB
c2t50050768011036A1d4: configured with capacity of 209.98GB
c2t50050768012036A1d4: configured with capacity of 209.98GB
c2t50050768012036A1d5: configured with capacity of 209.98GB
c2t50050768011036A1d5: configured with capacity of 209.98GB
c2t50050768012036A1d6: configured with capacity of 209.98GB
c2t50050768011036A1d6: configured with capacity of 209.98GB
c2t50050768011036A1d7: configured with capacity of 209.98GB
c2t50050768012036A1d7: configured with capacity of 209.98GB
c2t5005076801203680d0: configured with capacity of 12.00GB
c2t5005076801103680d0: configured with capacity of 12.00GB
c2t5005076801203680d1: configured with capacity of 31.98GB
c2t5005076801103680d1: configured with capacity of 31.98GB
c2t5005076801203680d2: configured with capacity of 209.98GB
c2t5005076801103680d2: configured with capacity of 209.98GB
c2t5005076801103680d3: configured with capacity of 209.98GB
c2t5005076801203680d3: configured with capacity of 209.98GB
c2t5005076801203680d4: configured with capacity of 209.98GB
c2t5005076801103680d4: configured with capacity of 209.98GB
c2t5005076801203680d5: configured with capacity of 209.98GB
c2t5005076801103680d5: configured with capacity of 209.98GB
c2t5005076801103680d6: configured with capacity of 209.98GB
c2t5005076801203680d6: configured with capacity of 209.98GB
c2t5005076801103680d7: configured with capacity of 209.98GB
c2t5005076801203680d7: configured with capacity of 209.98GB
vpath1a: configured with capacity of 209.98GB
vpath2h: configured with capacity of 209.98GB (I had problems with this one later)
vpath3a: configured with capacity of 209.98GB
vpath4a: configured with capacity of 209.98GB
vpath5a: configured with capacity of 209.98GB
vpath6a: configured with capacity of 209.98GB
vpath7a: configured with capacity of 31.98GB
vpath8a: configured with capacity of 12.00GB

[coneja] /opt/IBMsdd/bin #
I ran into a problem: it created a vpath2h instead of a vpath2a. I noticed when I ran metainit, which complained; I fixed it by removing and recreating the vpaths.

[coneja] /opt/IBMsdd/bin # metainit d301 1 3 vpath1a vpath2h vpath3a
metainit: coneja: vpath2h: No space left on device

[coneja] /opt/IBMsdd/bin #

[coneja] /opt/IBMsdd/bin # rmvpath -b -all
Continuing will remove IBMsdd vpath device binding with serial number.
Do you want to continue (y/n)?: y
Continuing will remove all IBMsdd vpath devices.
Do you want to continue (y/n)?: y
Removing all vpath devices...
[coneja] /opt/IBMsdd/bin # cfgvpath
Vpath: Configuring 64 devices (8 disks * 8 slices)
[coneja] /opt/IBMsdd/bin # vpathmkdev
[coneja] /opt/IBMsdd/bin #
After that, everything was created fine.
[coneja] /opt/IBMsdd/bin # metainit d301 1 3 vpath1a vpath2a vpath3a
d301: Concat/Stripe is setup
[coneja] /opt/IBMsdd/bin # metastat d301
d301: Concat/Stripe
Size: 1321058304 blocks (629 GB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase Reloc
/dev/dsk/vpath1a 0 No Yes
/dev/dsk/vpath2a 16384 No Yes
/dev/dsk/vpath3a 16384 No Yes

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/vpath1a Yes id1,thdd@n60050768019901b400000000000002ba
/dev/dsk/vpath2a Yes id1,thdd@n60050768019901b400000000000002b8
/dev/dsk/vpath3a Yes id1,thdd@n60050768019901b400000000000002b7
[coneja] /opt/IBMsdd/bin #
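To close the loop, a sketch of putting a filesystem on the new metadevice and mounting it (the mount point name is made up for the example; add the matching /etc/vfstab entry if it should mount at boot):

newfs /dev/md/rdsk/d301
mkdir /DATA301
mount /dev/md/dsk/d301 /DATA301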