Dec 18, 2001
Many years ago I was working on a project with a few other guys; we were building a new (theoretical) operating system. My specialty was the file system and the GUI. Fun fun...
One of the conclusions I came up with was to borrow some features from OS/2's HPFS. I think it is important that the FS always save files as one contiguous stream (no fragmentation). Also, instead of linking each block to the next, the beginning of the file holds a block index.
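To make the contrast concrete, here's a minimal sketch of what such an on-disk header might look like. The struct and every field name are hypothetical, not HPFS's actual layout:

    #include <stdint.h>

    /* Hypothetical on-disk file header: instead of each block storing
       a "next" pointer (a chain you must walk block by block), the
       header carries an up-front index of every block's address, so
       one read of the header tells you where block i lives. */
    struct file_header {
        uint32_t magic;        /* marks a valid header                    */
        uint32_t block_size;   /* bytes per block                         */
        uint32_t block_count;  /* number of blocks in the file            */
        uint64_t block_addr[]; /* block_addr[i] = disk address of block i */
    };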
So a few days ago I was thinking about file systems again... probably because my CISSP study materials mentioned NFS and I've been defragmenting laptops.
I'm now thinking that since my FS uses contiguous files, you don't need to index each block at all; you just need to know where the file starts and its size. Instead, I would include a parity index, so corrupted blocks could potentially be rebuilt. Additionally, a checksum of each file would be kept, to flag any corruption or tampering right away.
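As a rough illustration of the recovery idea, here's how a single XOR parity block can rebuild one corrupted block in a group (the same principle RAID 4/5 uses). This is just a sketch under assumed names and a made-up block size, not a real implementation:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE 4096  /* assumed block size for the sketch */

    /* Compute an XOR parity block over n data blocks. */
    void build_parity(uint8_t blocks[][BLOCK_SIZE], size_t n,
                      uint8_t parity[BLOCK_SIZE])
    {
        memset(parity, 0, BLOCK_SIZE);
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < BLOCK_SIZE; j++)
                parity[j] ^= blocks[i][j];
    }

    /* Rebuild block `bad` in place: parity XOR all the good blocks
       yields the missing one, as long as only one block is bad. */
    void recover_block(uint8_t blocks[][BLOCK_SIZE], size_t n,
                       size_t bad, const uint8_t parity[BLOCK_SIZE])
    {
        memcpy(blocks[bad], parity, BLOCK_SIZE);
        for (size_t i = 0; i < n; i++)
            if (i != bad)
                for (size_t j = 0; j < BLOCK_SIZE; j++)
                    blocks[bad][j] ^= blocks[i][j];
    }

The two mechanisms complement each other: the per-file checksum tells you that something is wrong and which block fails verification, and the parity then lets you rebuild that block.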
My newest idea is how files are treated. Our file systems today have fixed block sizes. What if the FS had flexible block sizes? I'm visualizing a drive: at the beginning, executable-type files are kept, with very small block sizes. They don't leave much slack for growth, because they should rarely change. At the other end are data-type files. Their blocks are extremely large, and the bytes are written in reverse order, so a log file or document grows toward the beginning of the FS.
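Here's a rough sketch of that two-ended allocation, with executables growing up from the front of the volume and data growing down from the back, much like a heap and stack growing toward each other. Every name here is invented for illustration:

    #include <stdint.h>

    struct volume {
        uint64_t size;       /* total bytes on the volume             */
        uint64_t exec_next;  /* next free byte at the low (exec) end  */
        uint64_t data_next;  /* next free byte at the high (data) end */
    };

    /* Place an executable at the low end; returns its offset,
       or -1 if the two regions would collide. */
    int64_t alloc_exec(struct volume *v, uint64_t bytes)
    {
        if (bytes > v->data_next - v->exec_next)
            return -1;                     /* out of space */
        uint64_t off = v->exec_next;
        v->exec_next += bytes;
        return (int64_t)off;
    }

    /* Place a data file at the high end; each allocation moves the
       cursor downward, so data grows toward the start of the disk. */
    int64_t alloc_data(struct volume *v, uint64_t bytes)
    {
        if (bytes > v->data_next - v->exec_next)
            return -1;
        v->data_next -= bytes;
        return (int64_t)v->data_next;
    }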
Discuss.