
Question: best performance of hard disk reading (Users)

posted by Arjay, 19.07.2010, 16:24
(edited by Arjay on 19.07.2010, 16:40)

> > A disk cache is an area of memory that is set aside to store the data most
> > recently read from the hard disk.
> > But it does not work with data that has not yet been read.

It depends on the design of the cache and what options the end user has chosen. Read-ahead cache buffers have been common on PCs for years, in various places: not only in hardware such as disk controllers, but also in software such as CD-ROM drivers (e.g. MSCDEX's /M option) and SMARTDrive (its /B option). There is even a dedicated DOS utility from 1986 called DFACC (DOS File ACCelerator) by S.H. Smith, which apparently (dfacc10a.zip) does some of what you were asking for. Personally, however, I wouldn't risk using DFACC with a modern DOS as it is far too old (last tested with DOS version 3.1!). Note that DFACC.DOC incorrectly says the following: "This is the opposite of a "cache" type program. A cache keeps data AFTER it has been used; DFACC gets data BEFORE it is needed."
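
For anyone who hasn't played with those read-ahead options, they go on the load line, e.g. something like this (the numbers are only examples; check the docs for your versions):

    REM AUTOEXEC.BAT fragment - sizes are illustrative only
    REM /M:12 gives MSCDEX 12 sector buffers (/D must match the CD driver's /D)
    MSCDEX /D:MSCD001 /M:12
    REM /B:32768 gives SMARTDrive 4.x a 32 KB read-ahead buffer
    SMARTDRV /B:32768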

I say "incorrectly" because the computer-dictionary definition of "cache" is "to store data in a faster storage system, or a storage system closer to the usage of the data"; in other words, the textbook definition covers both the BEFORE and the AFTER case. It also doesn't say that cached data has to be accessed "immediately"; most people simply expect cached data to be used immediately, but it doesn't have to be.
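
To make the BEFORE/AFTER point concrete, here is a minimal sketch in plain C (the layout and the read_sector() stub are hypothetical, not any particular driver) of a cache that both keeps recently read sectors, the AFTER case, and prefetches the following ones, the BEFORE case:

    #include <string.h>

    #define SECTOR_SIZE 512
    #define CACHE_SLOTS 64     /* direct-mapped: one sector per slot    */
    #define READ_AHEAD   8     /* sectors to prefetch past each request */

    /* stub standing in for the real disk driver (hypothetical) */
    static void read_sector(unsigned long lba, void *buf)
    {
        memset(buf, (int)(lba & 0xFF), SECTOR_SIZE);
    }

    static unsigned long slot_lba[CACHE_SLOTS];
    static unsigned char slot_ok[CACHE_SLOTS];
    static unsigned char slot_buf[CACHE_SLOTS][SECTOR_SIZE];

    static void cache_store(unsigned long lba, const void *buf)
    {
        unsigned i = (unsigned)(lba % CACHE_SLOTS);
        slot_lba[i] = lba;
        slot_ok[i]  = 1;
        memcpy(slot_buf[i], buf, SECTOR_SIZE);
    }

    /* the AFTER side: serve the request from memory if we still hold it */
    static int cache_fetch(unsigned long lba, void *buf)
    {
        unsigned i = (unsigned)(lba % CACHE_SLOTS);
        if (slot_ok[i] && slot_lba[i] == lba) {
            memcpy(buf, slot_buf[i], SECTOR_SIZE);
            return 1;
        }
        return 0;
    }

    void cached_read(unsigned long lba, void *buf)
    {
        unsigned char tmp[SECTOR_SIZE];
        unsigned long n;

        if (cache_fetch(lba, buf))
            return;                       /* hit: kept AFTER earlier use */

        read_sector(lba, buf);            /* miss: really hit the disk   */
        cache_store(lba, buf);

        /* the BEFORE side: speculatively pull in the sectors that follow */
        for (n = 1; n <= READ_AHEAD; n++) {
            read_sector(lba + n, tmp);
            cache_store(lba + n, tmp);
        }
    }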

> > Need a disk cache utility to store the FAT and directories completely
> > (maybe compressed).
The reason I asked for a little more info was to better understand your needs and thinking: depending on what you are looking to do, you may not even need to bother with FAT caching at all.

For example, if you were asking because you were planning to code up something for older machines, and were mostly thinking about caching because you planned to use a lot of files, then my thinking would be to take a step back and ask yourself whether it is possible to eliminate the need for the files in the first place. In other words, rather than looking for a solution to the problem, I would be looking to see if we could eliminate the problem instead.

I'll give you an example: years ago I came across an excellent 128-byte program for displaying a 320x200 PCX file (this was after I'd coded a slightly larger program in TP to do the same). I already knew when I saw this little program that the header of a PCX file is 128 bytes, the same length as the program (obviously done on purpose). Like most people who knew this, I suspect, I immediately realised that the code could easily be combined with a 320x200 PCX file to make a self-displaying graphics file. However, when I stepped back I also realised that since the program included file-handling code, that code wasn't needed if there was no separate file, so a combined file could be even smaller. That is to say, if there is no file then no file handling is required. I'm sure the author of the 128-byte program realised this, hence the 128-byte size to make that point, even though from memory he never actually stated it in his source.
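
For anyone wondering where the 128 comes from, the fields of the standard ZSoft PCX header add up to exactly that; a sketch in C (field names mine):

    struct pcx_header {                /* ZSoft PCX header: exactly 128 bytes */
        unsigned char  manufacturer;   /* always 0x0A                         */
        unsigned char  version;
        unsigned char  encoding;       /* 1 = RLE                             */
        unsigned char  bits_per_pixel;
        unsigned short xmin, ymin, xmax, ymax;
        unsigned short hdpi, vdpi;
        unsigned char  colormap[48];   /* 16-colour palette                   */
        unsigned char  reserved;
        unsigned char  nplanes;
        unsigned short bytes_per_line;
        unsigned short palette_info;
        unsigned short hscreen, vscreen;
        unsigned char  filler[54];     /* pads the header out to 128 bytes    */
    };
    /* 4 + 8 + 4 + 48 + 2 + 2 + 2 + 4 + 54 = 128 (16-bit shorts, packed) */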

In a similar mindset: we know FAT is wasteful in terms of size, so rather than necessarily worrying about compressing the FAT, consider whether, in your particular situation, you can avoid the need to use FAT in the first place. The problem with systems like DOS is that people tend to get conditioned, e.g. it is natural to think in terms of files and FAT. Which, interestingly, is exactly what Rugxulo picked up on: the fact that in the DOS world we are often still thinking in terms of FAT, files, etc.

So a program could, for example, accept a pointer to data in memory instead of expecting a file (or support both). This doesn't mean this is always the right option, but my point is that there are other alternatives.
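
A minimal sketch of what I mean (plain C; the names are mine, not from any real library): put the real work in a routine that takes a pointer and a length, and make the file version a thin wrapper, so a caller that already has the data in memory never touches the file system:

    #include <stdio.h>
    #include <stdlib.h>

    /* core routine: works on data already in memory, no file I/O at all */
    int show_image(const unsigned char *data, long len)
    {
        if (len < 128)
            return -1;          /* too short to even hold a PCX header */
        /* ... decode and display from data[] ... */
        return 0;
    }

    /* convenience wrapper for callers who only have a filename */
    int show_image_file(const char *name)
    {
        FILE *f = fopen(name, "rb");
        unsigned char *buf;
        long len;
        int rc;

        if (f == NULL)
            return -1;
        fseek(f, 0L, SEEK_END);
        len = ftell(f);
        fseek(f, 0L, SEEK_SET);

        buf = malloc(len);
        if (buf == NULL || fread(buf, 1, len, f) != (size_t)len)
            rc = -1;
        else
            rc = show_image(buf, len);

        free(buf);
        fclose(f);
        return rc;
    }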

> > side. I do not have a benchmark to prove it. :-P
Ever read The Story of Mel in relation to drum memory?

> I'm a bit fuzzy on the details since I never looked too closely,
To be honest, I am as well. Although I started coding up some FAT stuff some time back, I then realised there were other, better ways of doing the same things I wanted to do, so I spent the time on things like file packaging instead: rather than writing FAT code I wrote DLL handling code.

> so maybe I'm thinking of only really small disks. (Obviously not
> FAT32, but if FAT16 is max. 65535 files,
As I understand it, with FAT16 the root directory is limited to 512 entries (I've run into this many times); 65,536 is the overall maximum, but it also depends on the size of the disk.
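
That 512-entry root directory figure maps onto a fixed on-disk area; a quick sketch of the arithmetic (plain C, values typical for a FAT16 hard disk volume):

    #include <stdio.h>

    int main(void)
    {
        unsigned root_entries     = 512;  /* BPB root entry count (typical) */
        unsigned bytes_per_entry  = 32;   /* fixed size of a dir entry      */
        unsigned bytes_per_sector = 512;

        unsigned root_bytes   = root_entries * bytes_per_entry;
        unsigned root_sectors = root_bytes / bytes_per_sector;

        /* 512 * 32 = 16384 bytes = 32 sectors reserved for the root dir */
        printf("root dir: %u bytes, %u sectors\n", root_bytes, root_sectors);
        return 0;
    }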

> would that fit? Arjay? :-) )
:)

> of ext2 or HPFS or whatever. (Linux gets 40+ FSes and we get only one,
> whee! It's a pity they don't help us more. If I weren't so dumb ....)
Don't put yourself down, and yes, I totally agree it would be good if more file systems were supported under DOS. It is worth remembering that a CD-ROM driver is an example of DOS being extended with an additional file system; it's just that most organisations lost interest in extending DOS. It is also worth noting that a number of DOS demos and games not only supported file packages (like WADs), but some did this by patching the Interrupt 21h file-handling routines. Indeed, I remember hearing of a demo group having a problem when an EXE virus got "inside" their file system after an infected file was packaged in.
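
For the curious, the Int 21h patching trick looks roughly like this. A minimal Open Watcom-style sketch, using the _dos_getvect/_dos_setvect/_chain_intr helpers; the package lookup itself is left imaginary, and a real hook would stay resident rather than unhooking on exit:

    #include <dos.h>
    #include <i86.h>

    static void (__interrupt __far *old_int21)();

    /* watch DOS "open file" (AH=3Dh); everything else passes straight on */
    void __interrupt __far new_int21(union INTPACK r)
    {
        if (r.h.ah == 0x3D) {
            /* DS:DX points at the ASCIIZ filename being opened; a packager
               would check it against its package index here and, on a match,
               serve the file itself instead of letting DOS see the call */
        }
        _chain_intr(old_int21);          /* hand on to the original handler */
    }

    int main(void)
    {
        old_int21 = _dos_getvect(0x21);
        _dos_setvect(0x21, new_int21);

        /* ... run the demo/game with the hook in place ... */

        _dos_setvect(0x21, old_int21);   /* unhook before exiting */
        return 0;
    }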

 
