Activity log for bug #714941

Date Who What changed Old value New value Message
2011-02-08 01:38:13 Jason Gerard DeRose bug added bug
2011-02-08 01:43:53 Jason Gerard DeRose description

Old value: the same text as the new value below, except that the BIOS fakeraid sentence read "I'm not sure we actually gather the needed information".

New value:

A dmedia FileStore (see dmedia/filestore.py) is a kind of layered filesystem in that it takes files and puts them in a special layout (special filenames) according to file content hash. Each FileStore has a corresponding document in CouchDB, something like this:

    >>> doc = {
    ...     '_id': 'NZXXMYLDOV2F6ZTUO5PWM5DX',
    ...     'type': 'dmedia/file',
    ...     'time': 1234567890,
    ...     'plugin': 'filestore',
    ...     'copies': 1,
    ... }

Similar to lp:692449, we want to use udisks to get information about the physical storage device and the specific partition where the FileStore is located. We'll be getting the information from org.freedesktop.UDisks.Device, documented here:

    http://hal.freedesktop.org/docs/udisks/Device.html

As far as the physical device, we definitely want:

    DeviceSize
    DriveSerial
    DriveWwn
    DriveVendor
    DriveModel
    DriveRevision

As far as the partition, we definitely want:

    PartitionSize
    PartitionLabel
    PartitionUuid
    file system type (ext4, whatever)

By default, we will assume the physical device gives us a {'copies': 1} level of durability. If the system is using BIOS fakeraid, I'm not sure we can actually gather the information needed to determine the durability of the storage array; the user could always manually update the copies value. If the partition happens to reside on an mdadm array (quite likely, as Novacut is targeting pro video on Ubuntu), then we can gather the information needed to determine durability.

I have a hunch this will be controversial, but here's the durability we're going to assign for each mdadm RAID level:

    raid0            - {'copies': 0}
    raid1            - {'copies': 2}
    raid5            - {'copies': 1}
    raid6            - {'copies': 1}
    raid10 n2/f2/o2  - {'copies': 2}
    raid10 n3/f3/o3  - {'copies': 3}
    etc.

The controversy being that raid5 and raid6 only get {'copies': 1}. For people who feel this is unfair, I would suggest you read:

    http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

Also, perhaps this will help:

    Q: But raid1 and raid10 are too expensive, they take twice as many drives!
    A: Fine, then don't use RAID at all... raid5 and raid6 are more expensive than LVM, so save yourself some money!

    Q: But I want the peace of mind that comes with knowing my data is more durably stored!
    A: Then you have to pay for it... you don't get better durability without more copies.

    Q: Then what's the point of raid5/raid6 anyway?
    A: No idea.
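
A minimal sketch of pulling those properties over D-Bus with python-dbus against the UDisks 1.x service from the linked docs; the '/dev/sdb1' path, the IdType property (for the filesystem type), and the PartitionSlave hop to the parent drive are illustrative assumptions, not dmedia code:

    import dbus

    # Sketch only: query the UDisks 1.x daemon on the system bus for the
    # drive and partition properties listed above.  '/dev/sdb1', 'IdType'
    # (filesystem type) and 'PartitionSlave' (parent drive) are assumptions
    # based on the linked Device.html docs.
    bus = dbus.SystemBus()
    DEVICE_IFACE = 'org.freedesktop.UDisks.Device'

    def device_props(object_path):
        obj = bus.get_object('org.freedesktop.UDisks', object_path)
        return dbus.Interface(obj, 'org.freedesktop.DBus.Properties')

    manager = dbus.Interface(
        bus.get_object('org.freedesktop.UDisks', '/org/freedesktop/UDisks'),
        'org.freedesktop.UDisks'
    )

    # Resolve the partition holding the FileStore to its UDisks object path:
    part = device_props(manager.FindDeviceByDeviceFile('/dev/sdb1'))
    partition = dict(
        (k, part.Get(DEVICE_IFACE, k))
        for k in ('PartitionSize', 'PartitionLabel', 'PartitionUuid', 'IdType')
    )

    # The Drive* properties live on the parent drive, reached via PartitionSlave:
    drv = device_props(part.Get(DEVICE_IFACE, 'PartitionSlave'))
    drive = dict(
        (k, drv.Get(DEVICE_IFACE, k))
        for k in ('DeviceSize', 'DriveSerial', 'DriveWwn',
                  'DriveVendor', 'DriveModel', 'DriveRevision')
    )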
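
And a minimal sketch of the proposed mdadm-level-to-copies mapping from the description; the constant and function names are illustrative, not dmedia API:

    # Proposed durability mapping; for raid10, the trailing digit of the
    # layout string (n2/f2/o2, n3/f3/o3, ...) is the number of copies.
    MDADM_COPIES = {
        'raid0': 0,
        'raid1': 2,
        'raid5': 1,
        'raid6': 1,
    }

    def copies_for(level, layout=None):
        if level == 'raid10' and layout:
            return int(layout[1:])
        # Fall back to the single-drive default of {'copies': 1}:
        return MDADM_COPIES.get(level, 1)

For example, copies_for('raid5') gives 1 and copies_for('raid10', 'n3') gives 3.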
2011-02-23 09:00:42 Jason Gerard DeRose dmedia: milestone 0.4 0.5
2011-03-31 09:57:29 Jason Gerard DeRose dmedia: milestone 0.5 0.6
2011-04-28 22:29:50 Jason Gerard DeRose dmedia: milestone 0.6 0.7
2011-05-26 14:46:26 Jason Gerard DeRose dmedia: milestone 0.7 0.8
2011-07-18 22:40:20 David Green dmedia: assignee David Green (david4dev)
2011-07-19 22:13:00 David Green dmedia: status Triaged In Progress
2011-07-23 20:59:09 David Green branch linked lp:~david4dev/dmedia/udisks
2011-08-24 00:44:29 Jason Gerard DeRose dmedia: milestone 11.08 11.09
2011-08-25 23:17:14 Jason Gerard DeRose dmedia: status In Progress Fix Committed
2011-08-25 23:17:24 Launchpad Janitor branch linked lp:dmedia
2011-09-27 23:45:21 Jason Gerard DeRose dmedia: status Fix Committed Fix Released