I've come across another situation where this would be useful: testing filesystems with compression enabled (zfs, btrfs). Trying to measure raw write speed using /dev/zero is meaningless because the data compresses almost completely, and /dev/urandom is too slow (~6 MB/s). Based on this post from Server Fault (http://serverfault.com/questions/6440/is-there-an-alternative-to-dev-urandom) I tried:
time openssl rand 1000000000 | head -c 1000000000 > testfile
which worked OK, but being able to use dd with erandom would be ideal for this situation.
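For reference, a sketch of the workaround in the meantime: piping openssl's fast PRNG output into dd gives dd's own throughput report, which approximates what "dd with erandom" would provide. The file name and size here are just examples (1 MiB for illustration; a real benchmark would use a much larger size), and iflag=fullblock assumes GNU dd, since reads from a pipe can return short blocks.

```shell
# Sketch: benchmark write speed with incompressible data on a
# compression-enabled filesystem. openssl rand N emits exactly N
# pseudo-random bytes, far faster than /dev/urandom.
# iflag=fullblock (GNU dd) ensures full blocks are read from the pipe.
openssl rand 1048576 | dd of=testfile bs=64k iflag=fullblock
```

dd prints the bytes written and the transfer rate on stderr, so no separate `time` invocation is needed.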