samwdpckr
23.03.2024, 20:32
@ tom
|
I made my own DOS implementation |
Anyway, now the owner of the MCB changes when the realloc syscall is made, so modifying the internal structs is not necessary. I don't know if that works in FreeDOS, but because FreeDOS aims to be a 100% MS-DOS compatible operating system, it SHOULD work - if it doesn't, then it is a bug.
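For reference, the MS-DOS memory control block layout that compatible kernels reproduce is well documented; a C sketch of it (the field names here are mine, the layout is the documented one) looks roughly like this:

```c
#include <stdint.h>

/* Sketch of the documented DOS Memory Control Block: one 16-byte
 * paragraph placed immediately before each allocated arena.
 * The owner field is the part that changes when a resize/realloc
 * transfers ownership: it holds the PSP segment of the owning
 * process, or 0x0000 if the block is free. */
#pragma pack(push, 1)
struct dos_mcb {
    char     type;        /* 'M' = another MCB follows, 'Z' = last block */
    uint16_t owner_psp;   /* PSP segment of the owner; 0x0000 = free */
    uint16_t size_paras;  /* size of the arena in 16-byte paragraphs */
    char     reserved[3];
    char     name[8];     /* program name, filled in by DOS 4.0+ only */
};
#pragma pack(pop)
```

Programs that walk the MCB chain rely on exactly this 16-byte shape, which is why keeping the owner field up to date matters for compatibility.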
ST-DOS is not meant to be a clone of MS-DOS. Instead it is meant to be (mostly) compatible with the documented programming interface, so that it satisfies most already existing DOS programs and makes it easy to write new programs in such a way that they work on both ST-DOS and FreeDOS. Many old CP/M syscalls are not supported, but I'll probably write a "compatibility TSR" that implements some syscalls that are currently not implemented in the kernel.
The main purpose of ST-DOS is to be as efficient a 16-bit disk operating system as possible, and to be a software layer that provides basic I/O syscalls for accessing file systems and storage devices in a Unix-like manner. If you need an MS-DOS clone, then you should just use FreeDOS. |
samwdpckr
26.03.2024, 04:36
@ samwdpckr
|
I made my own DOS implementation |
New version. No new features this time, and no significant bugfixes either. In the filesystem mounter there was a bug that affected only builds with 64-bit LBA support when the BIOS supported LBA48 or higher. The program left the most significant doubleword of the "starting sector" of a filesystem uninitialized, which caused read errors. The default build of the kernel uses only 32-bit sector indexes anyway.
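As an illustration of this class of bug (all names here are invented for the example, not ST-DOS's actual code): assigning only the low doubleword of a 64-bit starting-sector field leaves the high half as whatever garbage was in memory, and the fix is simply to write the whole 64-bit value:

```c
#include <stdint.h>

/* HYPOTHETICAL mount record for illustration only. */
struct mount_info {
    uint64_t start_sector;   /* 64-bit LBA of the filesystem's first sector */
};

/* The fix: combine both doublewords so the high half is never
 * left uninitialized, even when the BIOS reports an LBA48 disk. */
static void set_start_sector(struct mount_info *m, uint32_t lo, uint32_t hi)
{
    m->start_sector = ((uint64_t)hi << 32) | lo;
}
```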
I made numerous small fixes so that the kernel supports 64-bit sector indexing properly, so it should now work with hard drives larger than 2 TB. I cannot test that though.
I also made many micro-optimizations and reordered the arguments of many functions.
I wrote routines that calculate the sector indexes using MMX instructions. I thought that it would make the math with 64-bit integers a lot faster than adding, subtracting and comparing the numbers 16 bits at a time in the 8086-compatible general purpose registers. In the end the MMX version turned out to be significantly slower, at least on a 3 GHz Pentium 4. This again shows that using fancy new instruction set extensions does not always make the program run faster, but it can very efficiently ruin compatibility with older hardware. The kernel can be compiled with "-dUSE_MMX" to create a build that uses MMX instructions.
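For illustration, this is what "16 bits at a time" means: on an 8086 target a 64-bit addition becomes four chained 16-bit additions with carry propagation (one ADD followed by three ADCs). A C sketch of the same limb arithmetic:

```c
#include <stdint.h>

/* Add two 64-bit values stored as four 16-bit limbs,
 * least-significant limb first -- the shape an 8086-only
 * build is forced into (ADD + ADC + ADC + ADC). */
static void add64_limbs(uint16_t a[4], const uint16_t b[4])
{
    uint32_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint32_t sum = (uint32_t)a[i] + b[i] + carry;
        a[i]  = (uint16_t)sum;   /* keep the low 16 bits of the partial sum */
        carry = sum >> 16;       /* propagate the carry into the next limb */
    }
}
```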
The kernel can also be compiled for different targets than just 8086. I noticed that when it is compiled for i386 or newer CPUs, some programs just stop working. One of those programs is Doom's installer. I don't know why that happens.
The "VER" command of the command prompt now also shows the build target of the kernel. |
samwdpckr
27.03.2024, 23:29
@ samwdpckr
|
I made my own DOS implementation |
My previous message did not make much sense. It turns out that MMX only does packed math on elements up to 32 bits wide (a true 64-bit integer add, PADDQ, only arrived with SSE2), and many sources on the internet are wrong about this. I replaced the MMX routines with code that uses the 32-bit i386 registers. The new code is also much faster.
I also tried SSE2 instructions, but interestingly they were not much faster than MMX either.
Trying to replace 64-bit comparisons with hand-written routines seems to result in slow code. The reason is probably that the hand-written routines always have to return their value in a general-purpose register, whereas the comparisons that the compiler generates natively have their "return value" in the flags register. So although comparing two 64-bit numbers in 16-bit registers means having to do four conditional jumps, it is still faster than a hand-written routine. AFAIK the Watcom compiler does not support returning the value of an actual function in the flags register. But replacing 64-bit additions and subtractions with hand-written inline routines did speed things up a lot.
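The comparison case can be sketched the same way: scanning from the most significant limb down takes up to four 16-bit compares and conditional jumps, which is essentially the inline sequence the compiler emits (with its result left in the flags rather than materialized in a register, as described above):

```c
#include <stdint.h>

/* Compare two 64-bit values stored as four 16-bit limbs
 * (least-significant limb first), scanning from the most
 * significant limb down. Returns <0, 0 or >0 like memcmp --
 * a called routine must materialize this in a register,
 * which is exactly the overhead inline compares avoid. */
static int cmp64_limbs(const uint16_t a[4], const uint16_t b[4])
{
    for (int i = 3; i >= 0; i--) {
        if (a[i] != b[i])
            return a[i] < b[i] ? -1 : 1;
    }
    return 0;
}
```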
Symlinks can now point to device files. Previously they could only point to actual files.
The cache lookup table was previously a far object, but I moved it to the same segment with the kernel, which resulted in a small performance improvement. |
Rugxulo
Usono, 28.03.2024, 01:29
@ samwdpckr
|
I made my own DOS implementation |
> My previous message did not make much sense. It turns out that MMX does
> only 32-bit math and many sources from the internet are wrong about this. I
> replaced the MMX routines with code that uses the 32-bit i386 registers.
> The new code is also much faster.
Keep in mind that MMX is still using the FPU behind the scenes.
> I also tried SSE2 instructions, but interestingly they were not much faster
> than MMX either.
You're using a 3 GHz Pentium 4? Very quirky machines. SSE2 was first introduced with the Pentium 4, and is probably preferred for that machine (and is the default for AMD64). If it's not "faster" on that one machine, keep in mind that the Pentium 4 had long pipelines, no barrel shifter (i.e. shifts are slow) and other quirks (use "add reg,1" instead of "inc"). Even SSE2's bandwidth greatly increased in later CPUs, so it's definitely faster nowadays. GCC always had surprisingly good tuning for the Pentium 4. Feel free to compare its output. Oh, and FXSAVE/FXRSTOR is faster than the old FPU way of saving things. |
samwdpckr
28.03.2024, 04:15
@ Rugxulo
|
I made my own DOS implementation |
Nowadays many developers want to drop support for CPUs that don't have SSE2 instructions, and they are basically using those new instruction set extensions as a tool to bully people who have an "old" computer that still has more than enough computing power for their use. They justify it by saying that the new instruction set extensions make the program run faster, when the reality is that every new version of the program is always more bloated than the previous one. This culture is especially visible amongst the Rust gang and the developers of many graphical desktop libraries in the FOSS world.
They are purposely writing code that breaks when long is not 64 bits, and are writing makefiles that override the CFLAGS that the user has set, resulting in a binary that has a different CPU target than the user intended. This type of behaviour is unacceptable. When the program has zero lines of assembly, there is absolutely no reason to artificially limit its portability by preventing it from being built for certain CPU targets.
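For code that genuinely needs a 64-bit integer, the portable fix is to spell the width out with the C99 fixed-width types instead of assuming what long is (the typedef names here are illustrative, not from any particular project):

```c
#include <stdint.h>

/* long is 32 bits on Win64, on 16-bit DOS compilers and on most
 * ILP32 targets, but 64 bits on typical LP64 Unix -- so code that
 * needs exactly 64 bits should say so explicitly: */
typedef int64_t  file_offset_t;    /* illustrative name */
typedef uint64_t sector_index_t;   /* illustrative name */

/* Fail loudly at compile time instead of miscomputing at run time. */
_Static_assert(sizeof(sector_index_t) == 8, "need a real 64-bit type");
```

This keeps the source buildable for i486, i686, AMD64 and non-x86 targets alike, since no width assumption is baked in.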
And my example shows that using those new instruction set extensions like MMX and SSE2 does not always even make the code faster. SSE2 can do 64-bit math natively, so there is less code involved in adding 64-bit numbers, and it should be faster in theory, but not always in practice. The user must always have the option to build and optimize the code for their own computer, even when the computer is "old" or something else than intended by the original developer of the software. |
fritz.mueller
Munich, Germany, 28.03.2024, 21:19
@ samwdpckr
|
I made my own DOS implementation |
Only a few bug reports:
a) After installation of the latest version including leetos, I removed the diskette (controller still inside), rebooted and got a freeze.
Reason: the HD changed to A:; leetos still looks for "cd c:\leetos" instead of "%COMDRV%\leetos" and freezes.
b) It also freezes when I enter Z:\test (non-existing drive)
c) edit: I tried to open A:\fdauto.bat (which does not exist), chose A:\autoexec.bat and got the error message: Syscall 44: Unknown function 09.
d) I entered mkeyb041 gr (German keyboard layout) and got a freeze.
e) I entered mkeyb050 gr /e (101-keys keyboard) and got a freeze.
f) I played 2kTetris (can be found at: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/games/tetris/
It worked, but when the game was over, I entered "n" (for no more games) and got a:
"KERNEL>_(blinking cursor)"
So much for today. |
Rugxulo
Usono, 29.03.2024, 02:53
@ samwdpckr
|
I made my own DOS implementation |
> Nowadays many developers want to drop support of CPUs that don't have SSE2
> instructions, and they are basically using those new instruction set
> extensions as a tool to bully people who have an "old" computer that still
> has more than enough computing power for their use.
SSE2 is already ubiquitous. It's anything older than x64 v3 (AVX) that people are struggling with now. I don't get it, personally.
https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels
> They are reasoning it
> by saying that the new instruction set extensions make the program run
> faster, when the reality is that every new version of the program is always
> more bloated than the previous one. This culture is especially visible
> amongst the Rust gang and the developers of many graphical desktop
> libraries in the FOSS world.
Intel does fund a lot of developers, and Windows, Linux, Mac are all pretty snobbish in jumping on trends and throwing a lot of things away.
Having said that, with all the malware / ransomware in the world, I think a lot of concerns for older hardware and software are related to security. |
samwdpckr
29.03.2024, 18:00
@ fritz.mueller
|
I made my own DOS implementation |
I noticed that symbolic links were completely broken. Now the kernel seems to handle all the corner cases correctly.
> Only a few bug reports:
> a) After installation of latest version including leetos I removed the
> diskette (controller still inside), rebooted and got a freeze.
> Reason: HD changed to A: leetos still looks for "cd c:\leetos" instead of
> "%COMDRV%\leetos" and freezes.
> b) It also freezes when I enter Z:\test (not existing drive)
Thanks. I fixed that.
> c) edit: I tried to open A:\fdauto.bat (which does not exist), chose
> A:\autoexec.bat and get the error message: Syscall 44: Unknown function
> 09.
I don't understand what you mean. But syscall 44 function 09 does not have to exist when MS-DOS compatible networking functions are not loaded, so that's not necessarily a bug in ST-DOS. Other DOS kernels just don't show that error message, and I intend to hide it in future versions.
> d) I entered mkeyb041 gr (german keyboard layout) and got a freeze.
> e) I entered mkeyb050 gr /e 101keys keyboard and got a freeze.
Memory control blocks in ST-DOS are different than they are in MS-DOS.
> e) I played 2kTetris (can be found at:
> https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/games/tetris/
> It worked, but when game was over, I entered "n" (for no more game) and got
> a:
> "KERNEL>_(blinking cursor)"
The game may have corrupted something in memory. Not necessarily a bug in ST-DOS.
> Having said that, with all the malware / ransomware in the world, I think a
> lot of concerns for older hardware and software are related to security.
Instruction set extensions have nothing to do with security. If anything, unconditionally requiring these new instruction set extensions worsens security, because it prevents some people from updating their software to the latest versions. |
Rugxulo
Usono, 29.03.2024, 22:01
@ samwdpckr
|
I made my own DOS implementation |
> > Having said that, with all the malware / ransomware in the world, I think
> > a lot of concerns for older hardware and software are related to security.
>
> Instruction set extensions have nothing to do with security. If anything,
> unconditionally requiring these new instruction set extensions worsen the
> security, because that prevents some people from updating the software to
> the latest versions.
I think I meant older cpus are quietly deprecated due to such bugs (overeager speculative execution or whatever). There are many workarounds in the Linux kernel for such flaws, but the "oldest" Linux kernel still actively maintained (on kernel.org) is 4.19 from Oct. 2018. I'm sure distributions can still patch other kernels if needed.
But overall it's sometimes hard to find sympathy for old or weak tech. If it isn't actively sold or maintained, or doesn't receive security fixes (or microcode updates), some people refuse to care.
In a perfect world, any tech that could work should be potentially good enough (and not worthy of scorn).
I saw someone on Twitch stream Quest for Glory (EGA) via DOSBox. As you know, VGA was vastly more popular and easier to program. Still, EGA works on more machines and 320x200x16 colors is still "good enough" for some things. My point is we don't always forcibly have to use the "latest greatest", even if everyone else seems to be doing it. (We may both be preaching to the choir here.) |
samwdpckr
29.03.2024, 22:31
@ Rugxulo
|
I made my own DOS implementation |
> I think I meant older cpus are quietly deprecated due to such bugs
> (overeager speculative execution or whatever).
Actually that's more of a problem with newer CPUs. Old pre-SSE2 CPUs don't do such things.
I just find it weird when a software developer, who doesn't even write assembly, suddenly says that someone else's computer is "too old" or "too slow" and must not be "supported" anymore. If that "someone else's computer" is a Turing-complete machine that has enough memory to run the program in question, and enough computing power that its owner does not consider it "too slow", the software developer should not have the right to say that it is.
The job of the software developer is to just maintain the program, and generally when we are speaking of relatively high-level userspace programs or their libraries, they compile to older x86 hardware just fine, unless something is actively preventing it from happening. Breaking compatibility with 32-bit x86 CPUs and/or pre-SSE2 CPUs will most likely also break compatibility with many non-x86 targets.
If the software developer only does their job and maintains the program so that it works efficiently, it also saves the environment because the users don't need to buy new computers to be able to continue using the program. In that case the program also consumes less electricity on newer machines. |
Rugxulo
Usono, 30.03.2024, 10:16
@ samwdpckr
|
I made my own DOS implementation |
> Nowadays many developers want to drop support of CPUs that don't have SSE2
> instructions, and they are basically using those new instruction set
> extensions as a tool to bully people who have an "old" computer that still
> has more than enough computing power for their use.
Those kinds of people upgrade every few years, maybe because of their workplace forcing them. They probably don't have the same import taxes or high shipping costs as you.
Also, when a compiler (or library) drops support for a certain OS or cpu family, someone else has to take up that burden or leave it abandoned. If one piece is missing, it's easy for the whole thing to crumble. "Tier 1" is almost always AMD64 (v2?) and maybe ARM64.
> They are reasoning it
> by saying that the new instruction set extensions make the program run
> faster, when the reality is that every new version of the program is always
> more bloated than the previous one. This culture is especially visible
> amongst the Rust gang and the developers of many graphical desktop
> libraries in the FOSS world.
CLMUL (circa 2010) claims to be faster. Have I ever used it? No.
I did, years ago, find 2x speedups in PAQ8 with his SSE2 patches. Actually, at that time, MMX or SSE2 was roughly the same speed. SSE2 improved more later. But my fork wasn't locking anyone in to specific OSes or cpus. (His default build was MinGW G++ 3.4 targeting MMX using MSVCRT.DLL that had a stupid tmpfile() bug on non-Admin Vista, so I rebuilt with newer DJGPP using CPUID. And I wanted CWSDPMI support so that it could swap to disk if needed. OpenWatcom/Causeway also worked but much less efficiently [worse malloc, slower swapping].)
Oscar Wilde said, "A cynic is a man who knows the price of everything and the value of nothing."
> They are purposely writing code that breaks when sizeof(long) is not 64,
> and are writing makefiles that override the CFLAGS that the user has set,
> resulting in a binary that has a different CPU target than was intended by
> the user. This type of behaviour is unacceptable. When the program has zero
> lines of assembly, there is absolutely no reason to artificially limit its
> portability by preventing building it to certain CPU targets.
While I 100% agree, even strict ISO C was never meant to guarantee portability. The standard explicitly allows non-portable, implementation-defined things. (Heck, PAQ8 was pre-C++11 and didn't compile on AMD64 due to such bugs. It was also single core only and very slow, but still very good.)
Actually, ISO C only requires that "long double" be at least as precise as double; making it the same as double is what MSVCRT does (or did, in the old days), same with OpenWatcom (unlike GCC). And the default AMD64 ABI doesn't use anything beyond "double" directly anyways. So FPC's Extended type is unsupported on AMD64. (AVX probably fixes that, but that's yet another can of worms.)
I believe the Linux kernel is roughly C11 (without VLAs) with GCC extensions of course.
> And my example shows that using those new instruction set extensions like
> MMX and SSE2 does not always even make the code faster. SSE2 can do 64-bit
> math natively, so there is less code involved in adding 64-bit numbers, and
> it should be faster in theory, but not always in practice. The user must
> always have the option to build and optimize the code for their own
> computer, even when the computer is "old" or something else than intended
> by the original developer of the software.
Old also means a lot of things. I remember my P4. It was single core only (mainly noticeable during antivirus scans, ugh). It had some USB 1.1 (slower) and some USB 2 ports. It was using a traditional hard drive (non-SSD). The 512 MB of RAM was probably DDR2. You can compare cpus from 2004 to 2020 to see a difference.
> I just find it weird when a software developer, who doesn't even write
> assembly, suddenly says that someone else's computer is "too old" or "too
> slow" and must not be "supported" anymore. If we consider that that "someone
> else's computer" is a turing-complete machine that has enough memory to run
> the program in question, and has enough computing power that its owner does
> not consider it "too slow", the software developer should not have the right
> to say that it is.
IIRC, the PlayStation 5 and XBox Series X are both Ryzen 2 with 16 GB of RAM. Compare that to 2013-era PS4 and XBoxOne. My point is that nothing is ever good enough for some people. (But they do try to do a lot more with HD/UHD/4k/whatever.)
> The job of the software developer is to just maintain the program, and
> generally when we are speaking of relatively high-level userspace programs
> or their libraries, they compile to older x86 hardware just fine, unless
> something is actively preventing it from happening. Breaking compatibility
> with 32-bit x86 CPUs and/or pre-SSE2 CPUs will most likely also break
> compatibility with many non-x86 targets.
For many years the world has already tried to deprecate and kill IA-32. Ubuntu doesn't support it anymore (since 2019?). Like I said, people are arguing over whether x64 v3 should be the new baseline or not.
They don't really sympathize with people who don't upgrade every few years. (I would say every five years would be nice, but it's usually shorter than that.) A lot of software won't work on Win7 or 8 anymore. Heck, for laughs, I sent my mother an email about Cygwin. They used to support Win9x. Then it was XP (for Unicode). Then they dropped IA-32. Then they dropped older OSes (like Win7/8).
Linux is all about choice. But often that choice is "not invented here!" or "UNIX only!" or "we don't care about anything before 2017!". Roll your own, develop your own ... easier said than done. Face it, "portability" is not a priority for most people.
To some people, RAM unused is wasted. You know, ulimit says their stack is "unlimited", and virtual memory and AMD64's 48-bit addresses just encourage wasteful practice. (Granted, trying to do everything in 64 kb is unrealistic. But a home user REALLY shouldn't need 64 GB of RAM for anything!) I'm not saying we should all target 286s or use 486s to identify our algorithms' bottlenecks. But meh, the modern world is not optimal. ("We don't have time! It costs money! We don't have a test setup!")
You know what's slow? Recompiling headers over and over again. (But C++20 fixed that. "Just throw more cores at it!") Garbage collection. Bytecode. Quadratic algorithms. Cpu cache thrashing. AutoTools/Configure under Cygwin. Slow software emulation (because xyz isn't supported anymore). Bugs. Limits.
The whole point of "computer science", I thought, was to not be tied to any specific technology. In other words, don't rely too heavily on any cpu or OS or compiler or language dialect. They don't last. You have to do everything yourself. (Well, sometimes other people help.)
But yes, talk is cheap, and "upgrade!" is never useful advice. |
samwdpckr
30.03.2024, 20:09 (edited by samwdpckr, 30.03.2024, 20:32)
@ Rugxulo
|
I made my own DOS implementation |
> For many years the world has already tried to deprecate and kill IA-32.
> Ubuntu doesn't support it anymore (since 2019?). Like I said, people are
> arguing over whether x64 v3 should be the new baseline or not.
I remember when Ubuntu used it as a "marketing advantage" against Windows that Ubuntu works on older computers. Nowadays Ubuntu is the complete opposite of what it once was. It is so bloated that even new cheap laptops cannot run it anymore.
And the 32-bit x86 Linux desktop is already completely broken because of those idiots who insist that everything must be compiled for at least SSE2 targets. Literally nothing works and everything just throws invalid opcode errors. When the compiler is asked to generate i686, i586 or even i486 code, there are still SSE2 instructions everywhere, because the CFLAGS are overridden in multiple lines of the build scripts. It is a lot of work to make them actually work, and then the next software update breaks it again.
At least command line tools still work on a Pentium II, which also has easily more than enough computing power for them. Everything happens instantly and the difference to new computers is almost unnoticeable.
The situation is already so crazy that I'm scared of installing software updates on a computer that is only ten years old and has an i7-based Xeon CPU, because I fear that the new version of some important package has instructions that my CPU does not support.
You cannot trust the archname in the name of the software package anymore. "i686" does not actually mean i686 and x64 may be basically anything.
> To some people, RAM unused is wasted.
I thought that one of the basic ideas of multitasking is that one program cannot consume all the memory of the computer. Aren't we supposed to have multitasking computers in 2024? |
Oso2k
31.03.2024, 21:22
@ samwdpckr
|
I made my own DOS implementation |
> And 32-bit x86 Linux desktop is already completely broken because of those
> idiots who insist that everything must be compiled for at least SSE2
> targets. Literally nothing works and everything just throws invalid opcode
> errors. When the compiler is asked to generate i686, i586 or even i486
> code, there is still SSE2 instructions everywhere, because the CFLAGS are
> overridden in multiple lines of the build scripts. It is a lot of work to
> make them actually work and then the next software update breaks it again.
I think you're making an assumption here. At least with gcc, binutils, and djlsr (djgpp's BSD-derived libc), you can tell build-djgpp exactly what to build and which ISA support it injects. Rich Felker's cross compiler scripts do something very similar for musl libc and Linux.
https://github.com/andrewwutw/build-djgpp/issues/45
https://github.com/richfelker/musl-cross-make |
Rugxulo
Usono, 31.03.2024, 22:59
@ Oso2k
|
I made my own DOS implementation |
> At least with gcc, binutils, and djlsr (djgpp's BSD-derived libc),
> you can tell build-djgpp exactly what to build and which ISA support it
> injects.
>
> https://github.com/andrewwutw/build-djgpp/issues/45
Just FYI ....
GNU sed (and FreeBSD sed) support a non-standard -i switch to edit "in place", thus no need for a temporary output file and mv afterwards. (If you use -i~, it will create a backup file first.) |
samwdpckr
01.04.2024, 16:31
@ samwdpckr
|
I made my own DOS implementation |
I wrote the basic utilities TREE, DELTREE, ATTRIB, XCOPY and DIRFIND (called when DIR /S command is given to the command prompt). DELTREE and XCOPY were important, because the graphical interface calls them when copying entire directories. I also added a non-standard syscall that changes the timestamp of a directory entry. |
Rugxulo
Usono, 08.04.2024, 03:19
@ samwdpckr
|
I made my own DOS implementation |
> I just find it weird when a software developer, who doesn't even write
> assembly, suddenly says that someone else's computer is "too old" or "too
> slow" and must not be "supported" anymore. If we consider that that
> "someone else's computer" is a turing-complete machine that has enough
> memory to run the program in question, and has enough computing power that
> its owner does not consider it "too slow", the software developer should
> not have the right to say that it is.
The CDC 6600 mainframe from 1964 was a 60-bit processor at 10 MHz with up to 982 KB of memory (almost a meg??), running 2 MIPS. It cost $2.4 million (equivalent to $23 million today).
Wikipedia says:
"Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark, giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using BASIC Interpreter to 2,403 MWIPS using a modern C/C++ compiler."
In fairness, you shouldn't have to be a systems engineer with a PhD just to use or program a computer. Having said that, choice of OS, compiler, and algorithm make a big difference (depending on the requirements). It's NOT true that everyone born before 1990 was a rube who knew nothing, nor that their machines were incapable of "real work".
> The job of the software developer is to just maintain the program, and
> generally when we are speaking of relatively high-level userspace programs
> or their libraries, they compile to older x86 hardware just fine, unless
> something is actively preventing it from happening.
C is much better than assembly for portability, but it's still VERY flawed. A skilled developer can work around most of that, but most people don't care. It is NOT "strictly conformant by default". No amount of "standards" or warnings or lints or libraries can hide all of that for you. You just have to "test test test!" and work around any problems you find (whether preprocessor macros, patches, separate modules, libraries, external tools, etc).
So-called "UNIX" or "UNIX-like" or "POSIX" or sticking to GNU tools or Ubuntu LTS does not save you from portability problems. There used to be a saying: "all the world's a VAX!" (see the Jargon File). VMS and Windows NT were both from Dave Cutler and somewhat considered anti-UNIX in philosophy. In other words, OS-specific code is nothing new. (And yet Cutler insisted that Windows NT be portable from the beginning. Too bad most other cpu makers like MIPS, Alpha, PowerPC couldn't compete with Intel.)
EDIT: You may prefer to hang around more sympathetic people (e.g. NetBSD or OpenBSD) rather than Linux. |
fritz.mueller
Munich, Germany, 17.05.2024, 12:49
@ Rugxulo
|
I made my own DOS implementation |
Hi,
I ran one more test of ST-DOS with version 2024-05-16 in VirtualBox.
I added some DOS programs to see what works and what doesn't.
a) mkeyb gr led to a blinking cursor, nothing else (I think I already
reported this). It would be great to have at least some other keyboard layouts than US and Finnish.
b) fdshell 0.10 (dosshell) led to a freeze after the first basic information.
dosshell /? showed the help and exited.
c) fd edit worked but froze at "save as - filename". After a regular exit without saving I got "unknown system call 38" followed by "Child exited with return code 0".
d) doszip (dz) led to a blinking cursor after start.
e) ctmouse seems to work
f) editor Blocek led to a blinking cursor.
g) game blkdrop showed some code and exited with: "Child exited with return code 243"
h) game flpybird led to the message: "Child exit"
i) game sayswho seemed to work - not absolutely sure - but after exit (ESC) the whole source code ran through on screen and I got the message "Child exited with return code 0"
j) game senet seemed to work.
k) NewDOS browser seems to work but after exit the whole source code ran through.
l) NewDOS calculator seems to work, after exit the source code ran through.
m) NewDOS scandrv -> same
n) NewDOS cleandsk -> same
o) NewDOS scanfix: Syscall 44: Unknown function 09
p) NewDOS setdate: changed the date but after exit the source code ran through.
q) It would be nice if "dir" would support "dir /w". |
samwdpckr
08.09.2024, 01:32 (edited by samwdpckr, 08.09.2024, 01:59)
@ samwdpckr
|
I made my own DOS implementation |
I noticed mbbrutman's NetDrive and got some inspiration from it. I created NETDISK. Now ST-DOS has the power of the cloud too.
Download page:
http://sininenankka.dy.fi/leetos/netdisk.php
Because ST-DOS has a different device driver API than MS-DOS and its 100% clones, NETDISK only works on ST-DOS. The way it works is pretty similar to mTCP NetDrive - the server computer has disk images and they are shared to the client computers. There are some differences though. Although NETDISK and mTCP NetDrive are products that serve different purposes and are not competing against each other, some comparisons between them can still be made:
NETDISK's pros, compared to mTCP NetDrive:
+ Works with all filesystem types (because ST-DOS has a filesystem driver API)
+ Has dynamic caching, which probably makes NETDISK faster
+ The server works on both DOS and POSIX-compatible operating systems like Linux (compiling it to DOS needs the socket.h header from ST-DOS's TCP/IP stack and its sockhdr.obj object file)
+ Works with almost arbitrarily large drives (ST-DOS uses 64-bit offsets for device files)
Cons:
- Only works with ST-DOS
- No advanced features like sessions and "undo" - the server program is currently very simple and doesn't have a lot of code
- The device driver does not currently support unloading; a reboot is needed to remove it
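Purely as an illustration of the general shape such a protocol takes (this is NOT the actual NETDISK wire format, which isn't described here; every field below is invented), a minimal sector-server request header might look like:

```c
#include <stdint.h>

/* HYPOTHETICAL request header for a network sector server, in the
 * spirit of NETDISK / mTCP NetDrive; field names and layout are
 * invented for illustration only. The 64-bit sector index mirrors
 * the 64-bit device-file offsets mentioned in the pros list. */
#pragma pack(push, 1)
struct netdisk_request {
    uint8_t  opcode;        /* e.g. 1 = read sectors, 2 = write sectors */
    uint8_t  drive;         /* which exported disk image on the server */
    uint16_t sector_count;  /* number of 512-byte sectors to transfer */
    uint64_t first_sector;  /* 64-bit LBA of the first sector */
};
#pragma pack(pop)
```

A real implementation would additionally have to pin down byte order and version the header, but the point is that a fixed, packed request followed by raw sector data keeps the server trivially simple.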
I had to make some fixes to the kernel and the TCP/IP stack to get NETDISK working, so it only works with the newest versions of them. |
samwdpckr
31.10.2024, 22:29
@ samwdpckr
|
I made my own DOS implementation |
I made a video of installing ST-DOS and NETDISK.
https://www.youtube.com/watch?v=iW4KVJcfmz8 |