I am relatively new to Linux and am completely dependent on these tutorials. I bought a server and installed SUSE. After running Ubuntu on my desktop and laptop, I decided to change the server to run Ubuntu as well. I didn't uninstall SUSE - I just installed over it. I installed the server based on "The Perfect Ubuntu Server 8" tutorial. Then, to set up RAID, I followed this tutorial - perfectly, I think - but at the end of step 6, after rebooting, I still see sda1 rather than md0.
It looks different from the one in the tutorial, but I attribute that to it being Ubuntu rather than Debian. Can anyone shed some light on this for me? Did I miss a step, or are there other steps involved because of Ubuntu?
No issues, no problems at all. I had several different partitions, even extended ones; I only had to keep track on paper of which partition goes into which numbered array - that's it ;-). I wonder, though: what's the point in having swap on a RAID1?
Ext2 on /boot instead of ext3 - you don't need journaling for your boot partition :). Striping made easy :p. If you suddenly need a lot of swap space, you can use the "swapon" command to swap to memory sticks or whatever you need; unlike an fstab entry, swap added with swapon gets reset on reboot ;).

I spent hours trying to work out not only how to set up a software RAID, but also how to do it on a boot partition. I didn't even come close to looking at a live system.
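A minimal sketch of that swapon trick (assuming the stick shows up as /dev/sdc1 - a hypothetical name):

    sudo mkswap /dev/sdc1    # write a swap signature to the stick
    sudo swapon /dev/sdc1    # start using it as swap immediately
    cat /proc/swaps          # verify the new swap area is active

Because nothing was added to /etc/fstab, the extra swap is gone again after the next reboot.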
Thank you! You might like to put a link somewhere in this howto to your newer howto detailing the install with Grub2. I spent some time following this howto, tripping up on Grub2 and doing lots of googling, before finally realising that what I thought were Google hits on this existing howto were actually pointing to a separate but very similarly named howto that covers Grub2!
I did manage to lose all my existing data following this. I was not doing this with a root partition, so I had no issues with partitions being in use, and I specified both disks in the create command rather than using the "missing" placeholder - maybe that was my problem.
One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data.
The device that receives the parity block is rotated so that each device has a balanced amount of parity information. As you can see above, we have three disks without a filesystem, each of the same size. To create a RAID 5 array with these components, pass them in to the mdadm --create command.
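A minimal sketch of that command (the array name /dev/md0 and the device names /dev/sda, /dev/sdb, and /dev/sdc are assumptions; point it at your own empty disks):

    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    cat /proc/mdstat    # shows the array state and the build progress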
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons).
In the /proc/mdstat output, the second highlighted line shows the progress on the build. Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.
As mentioned above, before you adjust the configuration, check again to make sure the array has finished assembling; completing this step before the array is built will prevent the system from assembling the array correctly on reboot. The output above shows that the rebuild is complete. Now we can automatically scan the active array and append its definition to the configuration file.
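A sketch of both steps, assuming the Debian/Ubuntu configuration path /etc/mdadm/mdadm.conf:

    watch cat /proc/mdstat      # wait until the recovery/resync line disappears
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u    # make the array available during early boot

Updating the initramfs as well ensures the definition is present when the array is needed at boot time.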
The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data.
The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.
As you can see above, we have four disks without a filesystem, each of the same size. To create a RAID 6 array with these components, pass them in to the mdadm --create command. Once the build completes, we can again scan the active array and append its definition to the configuration file, as sketched below.
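A sketch along the same lines (the four device names are again assumptions):

    sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf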
A RAID 10 array is traditionally implemented by nesting: a striped RAID 0 array built on top of mirrored RAID 1 sets. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The mdadm utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. We will be using the mdadm RAID 10 here. The possible layouts that dictate how each data block is stored are:

- near: the default arrangement; copies of a block are written at roughly the same offset on each device.
- far: copies are written to widely separated regions of each device, which improves sequential read performance at some cost to writes.
- offset: entire stripes, rather than individual chunks, are copied, each offset by one device, so copies of a chunk sit in consecutive stripes.
More detail on these layouts is in the RAID10 section of the md(4) man page (man 4 md), which is also available online. To create a RAID 10 array with these components, pass them in to the mdadm --create command. The layouts are specified as n for near, f for far, and o for offset.
The number of copies to store is appended afterwards. For instance, to create an array that has 3 copies in the offset layout, the command would look like this:
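A sketch of that command (hypothetical device names; storing 3 copies means usable capacity is one third of the raw total):

    sudo mdadm --create /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd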
The second highlighted area of the /proc/mdstat output shows the layout that was used for this example (2 copies in the near configuration), and the third highlighted area shows the progress on the build.

Thanks, Rob.

Post by vlad59 » All you say can be done with the installer. The installer is pretty easy to understand. Pinky and the brain forever.

Post by robbo » Thanks for the reply. One question.

Post by robbo » Right, got it - pretty much got it sussed. You need to create an array, then create a filesystem on the newly created array device and mount it - something I was not doing correctly.
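In command form, that sequence is roughly the following sketch (the partition names, filesystem, and mount point are all assumptions):

    # Build a mirror from two hypothetical partitions.
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    # Put a filesystem on the new array device...
    sudo mkfs.ext3 /dev/md0
    # ...and then mount it.
    sudo mkdir -p /mnt/raid
    sudo mount /dev/md0 /mnt/raid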