Creating RAID-5 partitions

# ls -lisah /dev/md*

# mknod /dev/md8 b 9 8
# mknod /dev/md9 b 9 9

-> Here I am using the raw devices directly, without creating partitions on them first; this is not the usual practice.

# mdadm -C /dev/md9 --level=raid5 --raid-devices=3  --spare-devices=0 /dev/sdd /dev/sde /dev/sdf

Continue creating array? y
mdadm: array /dev/md9 started.

# mdadm -D /dev/md9

/dev/md9:
Version : 00.90.03
Creation Time : Sat May 17 15:34:00 2008
Raid Level : raid5
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Device Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 9
Persistence : Superblock is persistent

Update Time : Sat May 17 15:34:00 2008
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 1% complete

UUID : 9b69078f:69de7b38:fc8b165e:ee02125a
Events : 0.1

Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 64 1 active sync /dev/sde
3 8 80 2 spare rebuilding /dev/sdf
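The Array Size and Device Size figures above are consistent with each other: RAID-5 spends one device's worth of space on parity, so usable capacity is (raid devices - 1) × device size. A quick sanity check:

```shell
# RAID-5 stores one chunk of parity per stripe, so usable space is
# (raid_devices - 1) * device_size.  Numbers taken from the mdadm -D output:
device_kib=976759936    # "Device Size", in 1K blocks
raid_devices=3
echo $(( (raid_devices - 1) * device_kib ))   # -> 1953519872, the "Array Size"
```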

# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md9 : active raid5 sdf[3] sde[1] sdd1[0]
1953519872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 1.2% (12067968/976759936) finish=165.5min speed=97127K/sec
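If you want to script around the rebuild, the progress figure can be pulled out of /proc/mdstat with a small sed expression. A sketch, shown here against a canned sample of the line above (on a live system, read /proc/mdstat directly):

```shell
# Extract the recovery percentage from an mdstat status line.
mdstat='[>....................]  recovery =  1.2% (12067968/976759936) finish=165.5min speed=97127K/sec'
pct=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "$pct"   # -> 1.2
```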

# mkfs.ext3 /dev/md9

  Another feature of mdadm is the ability to force a device (whether a member of a RAID array or a path in a multipath configuration) to be removed from a running configuration. In the following example, /dev/sdd is marked as faulty, then removed, and finally re-added to the configuration. For a multipath configuration, these actions will not affect any I/O activity taking place at that moment:

# mdadm /dev/md9 -f /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md9
# mdadm /dev/md9 -r /dev/sdd
mdadm: hot removed /dev/sdd
# mdadm /dev/md9 -a /dev/sdd
mdadm: hot added /dev/sdd

If we want to stop an entire array we can run: mdadm --stop /dev/md9

One important recommendation: when you set the array up, let it synchronize before formatting. It will take far less time, on the order of 100 times less. In any case, it has no impact unless you have to reboot.
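One way to honor that advice in a script is to poll /proc/mdstat and only run mkfs once no resync or recovery is in progress. A minimal sketch, exercised here against a canned sample (on a live system, feed it the contents of /proc/mdstat in a sleep loop):

```shell
# Succeed (exit 0) while any md array in the given status text is still
# resyncing or recovering; fail (exit 1) once everything is clean.
sync_in_progress() {
  printf '%s' "$1" | grep -Eq 'resync|recovery'
}

sample='md9 : active raid5 sdf[3] sde[1] sdd1[0]
      [>....................]  recovery = 1.2% (12067968/976759936)'
if sync_in_progress "$sample"; then
  echo "still syncing"        # on a real system: sleep, re-read /proc/mdstat
else
  echo "clean - safe to mkfs"
fi
```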

http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-raid-config.html

http://gentoo-wiki.com/Talk:HOWTO_Gentoo_Install_on_Software_RAID

http://web.mit.edu/AFS/athena/project/rhel-doc/4/RH-DOCS/rhel-ig-s390-multi-es-4/s1-s390info-raid.html

http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html


2 Responses to Creating RAID-5 partitions

  1. If you lose devices:

    mdadm --examine /dev/partition /dev/partition --scan >> /etc/mdadm.conf

    Do it for all the partitions at once, so that it recognizes the RAID type and so on, since they form a set.

    mdadm -A /dev/mdX

    And this last command activates what was detected in the previous step.
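    For reference, the lines that `mdadm --examine --scan` emits, and which end up appended to /etc/mdadm.conf, look roughly like this (the exact format varies by mdadm version; the UUID here is simply the one from the array above):

    ```
    ARRAY /dev/md9 level=raid5 num-devices=3 UUID=9b69078f:69de7b38:fc8b165e:ee02125a
    ```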

  2. [root@hn15 ~]# mdadm --create /dev/md1 --level=raid6 --raid-devices=8 --spare-devices=0 sd[a-h]2
    mdadm: You haven’t given enough devices (real or missing) to create this array
    [root@hn15 ~]# mdadm --create /dev/md1 --level=raid6 --raid-devices=8 --spare-devices=0 /dev/sd[a-h]2
    mdadm: /dev/sda2 appears to contain an ext2fs file system
    size=141277944K mtime=Thu Dec 26 17:08:48 2030
    mdadm: /dev/sda2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdb2 appears to contain an ext2fs file system
    size=-1438402048K mtime=Tue Mar 10 11:35:12 2009
    mdadm: /dev/sdb2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdc2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sde2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdf2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdg2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    mdadm: /dev/sdh2 appears to contain an ext2fs file system
    size=-452740604K mtime=Tue Feb 18 07:41:03 1975
    mdadm: /dev/sdh2 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Tue Feb 17 16:45:46 2009
    Continue creating array? y
    mdadm: array /dev/md1 started.
    [root@hn15 ~]# cat /proc/mdstat
    Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
    md2 : active raid0 sdh1[3] sdg1[2] sdf1[1] sde1[0]
    49158144 blocks 256k chunks

    md1 : active raid6 sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
    2856565248 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
    [>....................] resync = 0.0% (15944/476094208) finish=1487.7min speed=5314K/sec

    md0 : active raid1 sdb1[1] sda1[0]
    12289600 blocks [3/2] [UU_]

    unused devices: <none>
    [root@hn15 ~]# mkfs.
    mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.msdos mkfs.vfat
    [root@hn15 ~]# mkfs.ext3 /dev/md1 -L vz
    mke2fs 1.39 (29-May-2006)
    Filesystem label=vz
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    357072896 inodes, 714141312 blocks
    35707065 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    21794 block groups
    32768 blocks per group, 32768 fragments per group
    16384 inodes per group
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544

    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information:
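The md1 size in that listing checks out the same way as the RAID-5 one above, except that RAID-6 reserves two devices' worth of space for parity:

```shell
# RAID-6 keeps two parity chunks per stripe, so usable space is
# (members - 2) * device_size.  Figures from the /proc/mdstat listing above:
device_blocks=476094208   # per-member size, from the md1 resync line
members=8
echo $(( (members - 2) * device_blocks ))   # -> 2856565248 blocks, as reported
```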
