VOL. 2, NO. 12, December 2011

ISSN 2079-8407

Journal of Emerging Trends in Computing and Information Sciences ©2009-2011 CIS Journal. All rights reserved. http://www.cisjournal.org 



Design Considerations for Developing Disk File System

1 Wasim A. Bhat, 2 S. M. K. Quadri

1, 2 Department of Computer Sciences, University of Kashmir, India
{1 [email protected], 2 [email protected]}

ABSTRACT

File system design has never been a straightforward task, and designing and developing a disk file system is a particularly complex case of file system development. Since the inception of the first magnetic disk in 1956, many disk file systems have been drafted and implemented to fit the needs of users and/or to cope with changes in hardware technology. This has resulted in many objective-specific disk file systems and, consequently, no generalized design guidelines or criteria have been developed. In this paper, we use historical facts and current trends in the digital world to identify three basic design parameters for designing and developing a disk file system, each of which is affected by changes in hardware technology and user requirements. For each identified design parameter, we briefly introduce a novel approach to address the challenges it poses. Furthermore, we introduce a new file system benchmarking technique that overcomes problems found in existing techniques. The goal of this paper is to organize the design considerations for developing a disk file system and thus help a file system designer efficiently design and develop a new file system from scratch, or refine and fine-tune existing ones.

Keywords: Disk File System, Scalability, Performance, Extensibility, Benchmarking

1. INTRODUCTION

Early file systems did not have names; they were considered an implicit part of the operating system kernel. The first file system to have a name was DECtape, named after the company that made it. In 1973, UNIX Time Sharing System V5 named its file system V5FS, while at the same time the CP/M file system was simply called CP/M. This nomenclature continued until 1981, when MS-DOS introduced the FAT12 file system. Since then, file systems have had their much needed and deserved identity.

A file system is an important part of an operating system, as it provides the means by which information can be stored, navigated, processed and retrieved from the storage subsystem in the form of files and directories. It is generally a kernel module consisting of algorithms that maintain the logical data structures residing on the storage subsystem. The basic functions that every file system provides include operations such as file creation, reading, writing and deletion. Apart from these basic functions, some file systems also provide additional functions such as transparent compression and encryption, and features such as alternate data streams, journaling and versioning. Keeping all hardware parameters and the workload constant, the performance of a magnetic disk depends entirely upon the type of file system used. As such, file systems dictate the performance of a system, because magnetic disk performance is the limiting factor of overall system performance. Hence, file system design and development is a crucial task, with disk file system
design being particularly complex owing to the mechanical nature of the magnetic disk.

In general, file systems have been designed in an incremental fashion through the individual efforts of researchers and the software industry. The factors primarily responsible for this incremental development were changes in hardware technology and user requirements. These factors still dictate file system design today and will continue to do so. Although file system designs were modified from time to time to accommodate these changes, unfortunately no general design criteria were developed to mitigate such challenges. As an example, the basic design of the Windows FAT file system was modified to support large files, large numbers of files and large volumes, which resulted in increasingly scalable variants of the same design in the form of FAT12, FAT16, FAT32 and now exFAT. exFAT (unofficially called FAT64) also supports new features such as journaling. Similarly, the old UNIX file system was modified to increase its performance, which resulted in higher-performance variants of the same design in the form of FFS (Fast File System), C-FFS (Co-locating FFS) and so on. Furthermore, the Linux Extended file systems were modified to be more scalable and to support advanced features such as journaling, which resulted in the ext2, ext3 and ext4 file systems. In general, all popular and influential disk file system designs have been modified from time to time since their inception to mitigate the challenges put forth by changes in hardware technology and/or user requirements. Although this incremental development of the basic designs of file systems has successfully solved problems,
it has unknowingly created other problems, which include objective-specific disk file systems, incompatible variants of the same basic design, and a lack of generalized criteria for designing and developing a disk file system from scratch or for refining and fine-tuning existing ones. In this paper, we use historical facts and current trends in the digital world to identify three basic design parameters for designing and developing a disk file system, each of which is affected by changes in hardware technology and user requirements. For each identified design parameter, we briefly introduce a novel approach to address the challenges it poses. Furthermore, we also propose a new approach to file system benchmarking. The goal of this paper is to organize the design considerations for developing a disk file system and thus help a file system designer efficiently design and develop a new file system from scratch, or refine and fine-tune existing ones.

The rest of this paper is organized as follows: Section 2 presents the historical facts and trends in the digital world and uses them to identify three basic file system design parameters. Sections 3, 4 and 5 discuss the need for, and current approaches to, mitigating the respective parameters; in each, we briefly introduce a novel approach. Section 6 discusses the problems with current file system benchmarking practice and briefly introduces a novel approach. Finally, Section 7 presents the conclusion.

2. DESIGN PARAMETER IDENTIFICATION

In this section we use historical facts and trends in the digital world to identify the basic design parameters for designing and developing a disk file system. Table 1 summarizes some of the most influential and popular disk file systems and their design variants, along with their year of inception and the challenges they mitigated.

TABLE 1: HISTORY OF FILE SYSTEM DESIGN & DEVELOPMENT (table not reproduced here)

This historical fact is the basis of our argument that file system designs have been modified from time to time to accommodate certain challenges. Furthermore, the reason behind these challenges is either a change in hardware technology, a change in user requirements, or both. We can categorize these challenges, along with their consequences on file system design, into three broad categories as follows:

1. As digital technologies became affordable, they penetrated deep into every aspect of our day-to-day life. As such, digital data creation, growth and proliferation increased accordingly. This led to changes in file system design to accommodate large files, large numbers of files, large volumes and so on.

2. As solid state electronics advanced, microprocessors, primary memory, bus architectures, networks and other such components became faster. The only component that lagged was the magnetic disk drive, due to being mechanical at heart. This led to changes in file system design to increase the performance of common file system operations.

3. Due to the widespread usage of computers, the significance of digital data increased. This led to changes in file system design to increase the security, reliability and usability of file system operations in general and of data specifically.

It can now be safely argued that these three categories point directly to three file system design parameters, viz., File System Scalability, File System Performance and File System Extensibility.

3. FILE SYSTEM SCALABILITY

File system scalability is defined as the ability of a file system to support very large file systems, large files, large directories and large numbers of files while still providing good I/O performance. In recent years, digital technologies have penetrated every aspect of our day-to-day life, and this has led to the growth of a voluminous amount of digital data [1]. This growth, which includes both the amount of digital data and the individual size of digital data objects, affects the scalability of file systems. As such, there is a possibility that current highly scalable file systems will not be able to cope with future digital data growth [2]. Even in the present scenario, the FAT file system, the most compatible and most frequently used file system for removable storage devices, does not scale to current digital data trends. This problem has been with the FAT file system since its inception in the early 1980s. Although the problem has been mitigated from time to time in the form of FAT16, FAT32 and now exFAT (unofficially called FAT64), this has not completely solved it; furthermore, it has created several incompatible versions of the same basic design. The only difference that makes one FAT flavor more scalable than another is a bit length that identifies the size of a cluster and another bit length that uniquely identifies all clusters of a FAT volume [3]. These two bit lengths, which are stored as part of the on-disk data structures, set an upper limit on the volume size and file size. Moreover, this problem is not unique to FAT file systems; rather, every file system suffers from it. As an example, the ext2 i-node contains a table of 32-bit pointers to data blocks, while another field contains a bit value that identifies the block size [4]. The obvious solution to this kind of problem is to widen these bit lengths. This solution will work, as it has worked in the past, but it has certain limitations. First, the solution is not complete, as it merely raises the limit and is thus valid only for a limited period of time, i.e., until digital data object sizes change again. Second, it demands an understanding of the design of the file system in question, modification of its source
code to widen the bit lengths, and redistribution of the modification. This means that most of the effort is spent understanding the design and source code of the file system in order to make a small source code modification. Worse, this must be repeated for every file system that suffers from the scalability problem. Furthermore, the solution is only valid if we have the source code at our disposal. Third, as common operating systems have a monolithic kernel design, the file system is an integral part of the kernel, and this solution therefore demands modification and recompilation of kernel-level source code. This compromises the stability and reliability of the kernel in general and of the file system in particular, as kernel-level programs take years to become stable. Finally, this solution creates incompatible versions of the same type of file system merely to support larger volumes, larger file sizes and so on. To overcome this scalability problem, proposals like ZFS, which integrates volume management and the file system, are very promising but unfortunately do not solve the scalability problem of other file systems.

In 2004, Wright and Zadok [5] developed Unionfs, a stackable file system that allows users to specify a series of directories which are presented to users as one virtual directory. Although Unionfs was developed for namespace unification, it incidentally presents a virtual file system that is scalable enough to store and process large numbers of files and directories. As Unionfs can merge directories from different file systems into one virtual directory, it can be thought of as a virtual file system that scales to large volume sizes. mhddfs [6] is another file system, developed by Oboukhov, that allows several mount points (or directories) to be united into a single one. This makes it possible to combine several hard drives or network file systems so that one big file system is simulated. It is similar to Unionfs, but unlike Unionfs it can choose the drive with the most free space and move data between drives transparently to applications. If an overflow arises while writing to some file system, the file content already written is transferred to another file system containing enough free space for the file. The transfer is processed on the fly, fully transparently to the application that is writing.

Building on the approach of Unionfs and mhddfs, we propose suvFS, a scalable user-space virtual file system which can be mounted on top of any existing file system to extend its capability to handle large files, with logically no upper limit on file size. suvFS works by splitting a large file (one which cannot be created, stored or processed in its entirety) into a number of legitimately sized files. The splitting is transparent to user applications. suvFS does not allow individual access to the fragments of a large file (it does not even list them) and simulates a single virtual large file for each set of related physical file fragments. Furthermore, all file system operations are supported on this virtual large file, and the consequences of those operations are reflected in the associated file fragments. The approach followed by suvFS has several benefits: 1) large file size scalability can be added to any file system; 2) no code modification of the OS or even of the file system in
question is required; and 3) kernel stability and reliability are not compromised, as the file system extension is added at user level using the FUSE framework. We evaluated the performance of suvFS when mounted on top of the FAT32 file system. The results indicate that suvFS adds no significant performance overhead to the FAT32 file system when sequentially reading large files. However, the performance of the FAT32 file system deteriorates when sequentially writing large files, although the hit is largely due to the FUSE framework used.
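To make the splitting idea concrete, the following Python sketch illustrates how one virtual large file can be mapped onto several legitimately sized physical fragments. It is only an illustration of the principle, not the suvFS implementation (suvFS is a FUSE file system that intercepts file operations transparently); the fragment naming scheme, the FRAGMENT_LIMIT constant and the helper functions are hypothetical.

    import os

    # Illustrative sketch only: the core idea of mapping one virtual large file
    # onto several legitimately sized physical fragments, shown with plain file I/O.
    FRAGMENT_LIMIT = 4 * 1024**3 - 1   # e.g. just under the 4 GiB FAT32 file size limit

    def fragment_path(virtual_path, index):
        """Path of the index-th physical fragment backing a virtual file."""
        return f"{virtual_path}.frag{index:04d}"

    def write_virtual(virtual_path, data_chunks):
        """Store an arbitrarily large byte stream as a series of legal-sized fragments."""
        index, written = 0, 0
        out = open(fragment_path(virtual_path, index), "wb")
        for chunk in data_chunks:
            while chunk:
                room = FRAGMENT_LIMIT - written
                out.write(chunk[:room])
                written += min(len(chunk), room)
                chunk = chunk[room:]
                if written == FRAGMENT_LIMIT:          # fragment full: roll over
                    out.close()
                    index, written = index + 1, 0
                    out = open(fragment_path(virtual_path, index), "wb")
        out.close()

    def read_virtual(virtual_path, block=1 << 20):
        """Yield the fragments back to the caller as one continuous byte stream."""
        index = 0
        while os.path.exists(fragment_path(virtual_path, index)):
            with open(fragment_path(virtual_path, index), "rb") as frag:
                while True:
                    data = frag.read(block)
                    if not data:
                        break
                    yield data
            index += 1

In suvFS proper, a read or write at a given virtual offset would be redirected to the fragment whose index is the offset divided by the fragment limit, and the fragments themselves would be hidden from directory listings.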

4. FILE SYSTEM PERFORMANCE

Magnetic disk technology has improved since its inception in 1956, but the improvements have primarily been in cost and capacity rather than performance. Despite these improvements, disk drives are at heart mechanical devices and, as such, face performance problems; mechanical devices cannot improve as quickly as solid state devices. As an example, CPU performance increased 16,800 times between 1998 and 2008, but disk performance increased by just 11 times [7]. Thus, the challenge in building a high-performance disk file system lies in using the magnetic disk efficiently. The performance of a modern magnetic disk drive boils down to two parameters: rotational latency and seek time. A high-performance disk file system design should try to minimize rotational latency and avoid long seek distances. Furthermore, the conventional file system design (which delineates metadata and user data within the file system) and the diversity of workloads also hinder the performance of the magnetic disk. As such, in addition to minimizing rotational latency and long seeks, file systems should employ asymptotically superior data structures and algorithms to store, retrieve and process data. Two file system designs, FFS and LFS, are contemporary examples of high-performance designs developed around this mentality. Although FFS and LFS take different approaches, both mitigate this challenge by placing the metadata and user data pertaining to a file adjacent to each other on disk. Unfortunately, because the performance gap between magnetic disks and solid state electronic devices keeps widening, FFS and LFS are not able to keep pace with it. The file system research community has also proposed many performance patches to existing file system designs. Data duplication tries to reduce rotational delay and seek time by duplicating data blocks at various locations within the volume. Similarly, data relocation takes file access patterns into consideration and relocates data accordingly. Moreover, hybrid designs [8] exploit the strengths of two or more file system designs to use the disk efficiently. Unfortunately, all these proposals still suffer from the technology gap between solid state devices and magnetic disks.

To overcome this problem, researchers have proposed using hybrid storage for a single file system. The
idea is to exploit the negligible latency and seek time of solid state storage devices for storing and retrieving the small amount of metadata (which is the most frequently accessed data [9]) within the file system, while exploiting the high bandwidth and low cost of magnetic disks for the large amount of user data. Many proposals based on this approach have surfaced during the last decade, including the use of battery-backed RAM, MRAM, NVRAM and the like for metadata storage and retrieval, with magnetic disks holding the user data [10]. The problem with these approaches is that they demand a hardware upgrade or replacement. Also, the solution is not portable, as migration requires the same hardware configuration at the destination. Furthermore, a significant amount of source code modification is required in each file system. We propose using a USB-based solid state storage device for metadata and a magnetic disk for user data, so that the latency and seek time required to access metadata are negligible while the bandwidth for storing and accessing user data remains high, along with a low cost per unit capacity. First, no hardware upgrade or replacement is required, as USB flash drives and USB interfaces in PCs are common. Second, the solution is portable; all that is needed at the destination is a USB interface. Third, only a minimal amount of source code modification is required, as all that is needed is the identification of metadata and its redirection to the USB flash drive. We have designed and simulated a Hybrid And Largely Fast FAT file system called halfFAT. halfFAT exploits the delineation of metadata and user data in the design of FAT file systems to store and retrieve metadata from the USB flash drive and user data from the magnetic disk. Our simulation results indicate that the performance of FAT file systems is significantly increased by this approach.
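The routing policy at the heart of this idea can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the halfFAT design: halfFAT splits the on-disk FAT structures inside the file system driver itself, whereas here two ordinary directory trees stand in for the flash and magnetic devices, and FLASH_ROOT, DISK_ROOT and HybridStore are hypothetical names.

    import os

    FLASH_ROOT = "/mnt/usb_flash"    # low-latency device: holds small, hot metadata
    DISK_ROOT = "/mnt/magnetic"      # high-bandwidth, cheap device: holds bulk user data

    class HybridStore:
        """Route metadata records to flash and file contents to the magnetic disk."""

        def _meta_path(self, name):
            return os.path.join(FLASH_ROOT, "meta", name + ".rec")

        def _data_path(self, name):
            return os.path.join(DISK_ROOT, "data", name)

        def put(self, name, metadata: bytes, contents: bytes):
            os.makedirs(os.path.dirname(self._meta_path(name)), exist_ok=True)
            os.makedirs(os.path.dirname(self._data_path(name)), exist_ok=True)
            with open(self._meta_path(name), "wb") as m:   # frequent, small, random I/O
                m.write(metadata)
            with open(self._data_path(name), "wb") as d:   # infrequent, large, sequential I/O
                d.write(contents)

        def get(self, name):
            with open(self._meta_path(name), "rb") as m:
                metadata = m.read()
            with open(self._data_path(name), "rb") as d:
                contents = d.read()
            return metadata, contents

The point of the split is that the small, random, frequently repeated metadata I/O lands on the device with negligible seek and rotational cost, while the large sequential transfers go to the device with the best bandwidth per unit cost.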

5. FILE SYSTEM EXTENSIBILITY

Extending file system functionality in an incremental manner is valuable, but the enhancement patches should not be incorporated into the file system code; rather, they should be stacked or layered on top of the existing file system as a module. First, this way the reliability and stability of the existing file system are not compromised. Second, the layered or stacked file system can be used as an enhancement for multiple file systems without requiring an understanding of the design and code of each individual file system. Third, the modular approach eases debugging by narrowing the domain in which bugs can be introduced. Finally, incremental development makes it possible for third-party developers to release a file system improvement without deploying a whole file system from scratch; as such, improvements can be made available even for proprietary file systems whose source code is not available [11]. File system extensions developed using layering have been used for various purposes, including monitoring, data transformation, size changing, operation transformation, fan-out file systems, and so on.

Due to the
widespread usage of computer systems, the significance of digital data security has become a focus of current computer science research. Recovering data after deletion in file systems is trivial and can be performed even by novice attackers. As the file system is the lowest-level source and sink of information, this security breach can be stopped by adding secure deletion extensions to existing file systems. When a file is deleted or the Trash Bin is emptied, the end user believes that the files have been permanently removed. Operating systems give an illusion of file deletion by merely invalidating the filename and stripping it of its allocated data blocks. As such, the contents of the data blocks associated with a file remain on disk even after its deletion, until these blocks are reallocated to some other file and finally overwritten with new data [12]. Unfortunately, this time gap allows malicious users and hackers to recover deleted files. Also, laptops and portable storage devices can be discarded, lost or stolen, and sensitive and confidential information, deleted in the belief that it had been physically erased, can then be recovered even by novice users. Due to the pervasive use of digital content in our day-to-day life, most users do not even know that their disk contains confidential information in the form of deleted files; worse, the users who do know ignore the fact.

There are generally two methods for secure deletion of data: 1) overwrite the data, and 2) encrypt the data. Secure deletion using encryption employs encryption techniques to encrypt data before it is stored on disk and to decrypt it on retrieval. This solution protects both deleted and non-deleted data; however, it suffers from several practical problems and is often not feasible. Secure deletion using overwriting works by overwriting the metadata and data pertaining to a file when it is deleted. The most applicable and desirable level for data overwriting is transparent per-file overwriting at the file system level, where all the file system operations required for data overwriting can be intercepted and overwriting can thus be performed reliably. Many research proposals have attempted to add secure data deletion to file systems. In 2001, Bauer and Priyantha [13] modified the ext2 file system to asynchronously overwrite data on unlink and truncate operations. This method has some drawbacks: source code modification can break the stability and reliability of the file system, the modification must be made in every file system, and purging cannot survive across crashes. In 2005, Joukov and Zadok [14] demonstrated automatic instrumentation of source code using FiST to add purging support, saving the manual work of source code modification. If the source code is not available, the purging extension, called purgefs, instruments a null-pass v-node stackable file system, called base0fs, to add purging as a stackable file system. In asynchronous mode, purgefs can remap the data pages to a temporary file and overwrite them using a kernel thread. purgefs also suffers from a reliability problem, as purging cannot survive system crashes. In 2006, Joukov et al. proposed another FiST extension called FoSgen [15], which is similar to purgefs in
instrumentation, i.e., if the source code of the file system to be instrumented is not available, FiST creates a stackable file system. However, FoSgen differs from purgefs in operation, as it moves the files to be deleted or truncated to a special directory called ForSecureDeletion and invokes the user-mode shred tool to overwrite them. In the case of truncation, FoSgen creates a new file with the same name as the original and may need to copy a portion of the original file to the new one. Due to its Trash Bin-like functionality, purging in FoSgen survives system crashes, but this also increases the window of insecurity, as it provides a clean file system interface for data recovery. purgefs and FoSgen overwrite at the file level and are not able to exploit the behavior of file systems at the block level for efficiency. Although [13] does work at the block level, it makes no effort to exploit this.

We propose a reliable and efficient stackable file system, called restFS, for secure deletion of data. restFS exploits the behavior of file systems at the block level to achieve reliability and efficiency, which is missing from, and not possible in, existing secure data deletion techniques that work at the file level. restFS is motivated by the possibility of reducing the number of disk writes issued when all the individual overwrites to consecutive fragments of different files to be purged are merged into a single overwrite. Even in the case of two or more non-fragmented files to be purged whose contents are placed next to each other on disk, the overwriting can be made efficient by merging the individual overwrites into a single one. restFS also takes into account that, in a file system under heavy workload, de-allocated blocks have a good probability of being allocated again to some other file and eventually being overwritten with new data. restFS is implemented as a v-node stackable file system that can be mounted between the ext2 file system and the VFS to enhance the capability of ext2 to perform reliable and efficient secure deletion of data. Although the implementation of restFS is specific to ext2, it presents a novel approach that can be applied, without modifying the source code, to all file systems that export the block allocation map of a file to upper layers. We evaluated restFS using Postmark, and the results indicate that restFS can save between 28% and 98% of block overwrites in the ext2 file system. In addition, it can reduce the number of write commands issued to the disk by 88%.
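The coalescing idea that motivates restFS can be illustrated with a short Python sketch: collect the block numbers freed by deleted files, merge consecutive runs (even across files), and issue one overwrite per run instead of one per block or per file. The block size, device path and the source of the freed-block list are hypothetical; restFS itself obtains this information from the block allocation maps exported by ext2.

    import os

    BLOCK_SIZE = 4096          # assumed file system block size

    def coalesce(block_numbers):
        """Merge block numbers into (start, length) runs of consecutive blocks."""
        runs = []
        for block in sorted(set(block_numbers)):
            if runs and block == runs[-1][0] + runs[-1][1]:
                runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend the current run
            else:
                runs.append((block, 1))                     # start a new run
        return runs

    def purge(device_path, freed_blocks):
        """Overwrite every freed block with zeros, issuing one write command per run."""
        with open(device_path, "r+b") as dev:
            for start, length in coalesce(freed_blocks):
                dev.seek(start * BLOCK_SIZE)
                dev.write(b"\x00" * (length * BLOCK_SIZE))
            dev.flush()
            os.fsync(dev.fileno())

    # Blocks freed by three deleted files, e.g. 10-13, 14-15 and 40:
    # purge("/dev/sdb1", [10, 11, 12, 13, 14, 15, 40]) issues two writes instead of seven.

Merging adjacent freed extents, possibly belonging to different files, into single larger writes is one source of the savings reported above.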

6. FILE SYSTEM BENCHMARKING

Whenever new software or hardware is designed and developed, the first thing people are interested in is its performance. The performance data of such software or hardware has a significant impact on its value. Benchmarks are used to obtain this performance data and thus may add to or subtract from the value of that software or hardware. File systems are complex in design and operation, which makes file system benchmarks even more complex to design. Although every file system has a single motive, namely to mediate access to data on secondary storage devices via a uniform notion of files, file systems differ in many
ways, such as the type of underlying media, the storage environment, the workloads for which the system is optimized, and their features. In addition, complex interactions exist between file systems, I/O devices, specialized caches (e.g., buffer cache, disk cache), kernel daemons (e.g., kflushd in Linux), and other OS components [16].

Current file system benchmarking practice benchmarks a practical file system and compares the performance data so gathered with the performance data of some other practical file system. There are many problems with this mentality. First, the motive of benchmarking, namely to unveil a better file system or pinpoint the areas for improvement, drifts toward winning an argument. As benchmarks can make any file system look good or bad, there is a large quantity of benchmarks but no standard benchmark, and comparing results from different papers becomes difficult due to this lack of standardization. Further, the results of a multi-dimensional problem are presented as a scalar quantity. Second, every practical file system has good and bad corners; some are good at handling small files while others are good at large files, and so on [17]. As such, imperfect file systems are compared against each other, which can result in confusing conclusions for the same workload. For example, for two file systems F1 and F2 under the same intended workload, F1 may outperform F2 in one metric while F2 outperforms F1 in another, making result interpretation and decision making difficult. Third, it is hard to decide how much a badly performing file system should actually be improved in some aspect, because the better-performing file system, itself imperfect, sets the upper limit for comparison. Fourth, the results of such benchmarks are not portable; the figures change if the hardware configuration changes (say, a 5400 RPM disk is replaced by a 7200 RPM one), even though the relationship between the results may remain the same. Moreover, if a third file system is brought into the comparison, it must be benchmarked using the same configuration. Finally, current benchmarks do not pay much attention to the metadata and user data design policies of a file system, which dictate its disk layout and the complexity of its algorithms; this makes the areas of the design that need, or have scope for, improvement less visible.

To deal with these problems, we propose benchmarking a practical file system against a hypothetical file system which outperforms every practical file system of its class in every metric. This way, no confusing conclusions or difficult result interpretations arise. Further, to actually compare the performance data of the practical file system with that of the hypothetical file system, the hypothetical file system's operations are modeled as raw disk I/O operations and benchmarked. This change in the mentality of benchmarking eradicates the inherent problems of current benchmarking practice. First, the practical file system will always lag behind the hypothetical file system in every metric, and hence there are no confusing conclusions. Second, the upper limit on the improvement that should be made to the practical file system is
set by the hypothetical file system, a better design. Third, results can be presented and interpreted as figures signifying how close to, or far from, the better design the practical design is in all aspects of metadata and user data design policy; this can point to the specific areas of the practical file system's design which need, or have scope for, improvement. Fourth, portability of results can be achieved, as the hypothetical file system's operations are modeled as raw disk I/O operations. We have developed a hypothetical file system, called OneSec, for benchmarking practical file systems for small-file accesses. We benchmarked the ext2 file system against OneSec using a simple micro-benchmark. The results indicate that there is still room for improvement in the metadata design policy of the ext2 file system, as it is numerically far from OneSec. We also found that the user data policy of ext2 is much nearer to OneSec when handling large chunks of user data than small ones.
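As a rough illustration of this benchmarking mentality, the toy Python sketch below times a practical file system creating many small files and compares it against a simple model of the hypothetical file system, in which the same payload is written as one sequential stream. The mount point, target path, file count and file size are hypothetical parameters, and writing to a plain file merely stands in for the raw disk I/O with which OneSec is actually modeled.

    import os, time

    MOUNT_POINT = "/mnt/ext2_test"     # practical file system under test
    RAW_TARGET = "/mnt/raw_model.img"  # stand-in for the raw I/O of the hypothetical FS
    FILE_COUNT, FILE_SIZE = 1000, 4096
    PAYLOAD = b"x" * FILE_SIZE

    def bench_practical():
        start = time.perf_counter()
        for i in range(FILE_COUNT):
            path = os.path.join(MOUNT_POINT, f"small_{i}.dat")
            with open(path, "wb") as f:
                f.write(PAYLOAD)
                f.flush()
                os.fsync(f.fileno())
        return time.perf_counter() - start

    def bench_hypothetical_model():
        start = time.perf_counter()
        with open(RAW_TARGET, "wb") as f:
            for _ in range(FILE_COUNT):       # same payload, but as one sequential stream
                f.write(PAYLOAD)
            f.flush()
            os.fsync(f.fileno())
        return time.perf_counter() - start

    if __name__ == "__main__":
        practical, ideal = bench_practical(), bench_hypothetical_model()
        # Report how far the practical design is from the modelled ideal.
        print(f"practical: {practical:.3f}s  model: {ideal:.3f}s  ratio: {practical / ideal:.1f}x")

The ratio reported at the end is the kind of figure the proposed approach would present: a measure of how far the practical file system is from the modeled ideal, rather than a win/lose comparison against another imperfect file system.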

7. CONCLUSION

In this paper we argued that file system design has been, and still is, influenced by changes in hardware technology and user requirements since the existence of the first file system. We argued that this has resulted in many objective-specific file systems and, hence, that no generalized design considerations have been developed. To support our argument and to identify the basic design parameters, we used historical facts and trends in the digital world. We identified three file system design parameters: Scalability, Performance and Extensibility. For the scalability parameter, we proposed adding large file size scalability to any file system without modifying its source code. For the performance parameter, we proposed using hybrid storage for the metadata and user data of a file system. For the extensibility parameter, we proposed a reliable and efficient secure data deletion extension. Finally, we introduced a new benchmarking approach for file systems. This paper is meant to organize the considerations for developing a disk file system and thus help a file system designer efficiently design and develop a new file system from scratch, or refine and fine-tune existing ones.

ACKNOWLEDGMENT

The authors wish to thank fellow researchers in the Department of Computer Sciences, University of Kashmir, for their support and suggestions.

REFERENCES

[1] F. Moore, “Storage Facts, Figures, Best Practices, and Estimates,” Horison Information Strategies, September 2009.

[2] W. A. Bhat and S. M. K. Quadri, “Efficient Handling of Large Storage: A Comparative Study of Some Disk File
Systems,” In Proceedings of 5th National Conference on Computing for Nation Development, pp. 475-480, March 2011.

[3] W. A. Bhat and S. M. K. Quadri, “Review of FAT data structure of FAT32 file system,” Oriental Journal of Computer Science & Technology, vol. 3 no. 1, June 2010.

[4] W. A. Bhat and S. M. K. Quadri, “A Quick Review of On-Disk Layout of Some Popular Disk File Systems,” Global Journal of Computer Science & Technology, vol. 11 no. 6, April 2011.

[5] C. P. Wright and E. Zadok, “Unionfs: Bringing File Systems Together,” Linux Journal, vol. 2004 no. 128, December 2004.

[6] D. E. Oboukhov, mhddfs, http://svn.uvw.ru/mhddfs/, 2008.

[7] D. Klein, “History of Digital Storage,” Micron Technology Inc., December 2008.

[8] Z. Zhang and K. Ghose, “hFS: A Hybrid File System Prototype for Improving Small File and Metadata Performance,” In Proceedings of EuroSys’07, March 2007.

[9] W. A. Bhat and S. M. K. Quadri, “IO Bound Property: A System Perspective Evaluation & Behaviour Trace of File System,” Global Journal of Computer Science & Technology, vol. 11 no. 5, April 2011.

[10] An-I A. Wang, P. Reiher, G. J. Popek and G. H. Kuenning, “Conquest: Better Performance Through a Disk/Persistent-RAM Hybrid File System,” In Proceedings of the USENIX 2002 Annual Technical Conference, pp. 15-28, June 2002.

[11] W. A. Bhat and S. M. K. Quadri, “Open Source Code Doesn't Help Always: Case of File System Development,” Presented at National Seminar on Open Source Softwares: Challenges & Opportunities, University of Kashmir, June 2011.

[12] S. M. K. Quadri and W. A. Bhat, “A Brief Summary of File System Forensic Techniques,” In Proceedings of 5th National Conference on Computing for Nation Development, pp. 499-502, March 2011.

[13] S. Bauer and N. B. Priyantha, “Secure Data Deletion for Linux File Systems,” In Proceedings of the 10th USENIX Security Symposium, pp. 153-164, August 2001.

[14] N. Joukov and E. Zadok, “Adding Secure Deletion to Your Favorite File System,” In Proceedings of the Third International IEEE Security In Storage Workshop, December 2005.

[15] N. Joukov, H. Papaxenopoulos and E. Zadok, “Secure Deletion Myths, Issues, and Solutions,” In Proceedings of the 2nd ACM Workshop on Storage Security and Survivability, October 2006.

[16] W. A. Bhat and S. M. K. Quadri, “Benchmarking Criteria for File System Benchmarks,” International Journal of Engineering Science & Technology, vol. 3 no. 1, February 2011.

[17] S. M. K. Quadri and W. A. Bhat, “Choosing Between Windows and Linux File Systems for a Novice User,” In Proceedings of 5th National Conference on Computing for Nation Development, pp. 457-462, March 2011.

BIOGRAPHY

Wasim A. Bhat is a Ph.D. candidate in the Department of Computer Sciences, University of Kashmir, India. His research interests include operating systems, specifically file systems, with a focus on design considerations.

Dr. S. M. K. Quadri is on the Computer Science faculty in the Department of Computer Sciences, University of Kashmir, India.
