DOS ain't dead

samwdpckr

21.12.2025, 01:56
 

Fallacies that advocate software bloat (Miscellaneous)

Give me more fallacies to debunk!

http://sininenankka.dy.fi/leetos/swbloat.php

Rugxulo

Homepage

Usono,
22.12.2025, 12:01

@ samwdpckr
 

Fallacies that advocate software bloat

A Plea for Lean Software -- Niklaus Wirth, 1995

Rugxulo

Homepage

Usono,
25.12.2025, 09:09

@ samwdpckr
 

Fallacies that advocate software bloat

Some people (proudly!) have an aversion to "old" things, even if the "older" thing works perfectly fine (or better!). Apple seems to give up on things after seven years. Java seems to deprecate 10+ year-old code. Even the Linux kernel now requires at least GCC 8 to build.

The other bad idea is always "hardware is cheap" (e.g. RAM), but that is clearly not true anymore. RAM prices are going WAY up! I honestly hope this pushes some developers to optimize their software instead of being wasteful. Some people (even Linus himself) say that unused RAM is wasted RAM, but the idea that you can (or should) infinitely add more RAM to x64 is just plain bonkers.

On UNIX, ed would read the whole file into memory. Thus, sed was born to only read a line at a time (for bigger files). Gone are the days when slim software was valuable. Gone are the days of trying to cram something useful into CP/M (or DOS 640k or IA-32's 4 GB of RAM). Firefox doesn't even make 32-bit builds on Linux anymore.

I'm not really raging, but it's sad how bloated we've gotten. Optimization takes time, and some people just aren't willing to do it.

bencollver

Homepage

25.12.2025, 16:28

@ Rugxulo
 

Fallacies that advocate software bloat

> On UNIX, ed would read the whole file into memory. Thus, sed was born to
> only read a line at a time (for bigger files). Gone are the days when slim
> software was valuable. Gone are the days of trying to cram something useful
> into CP/M (or DOS 640k or IA-32's 4 GB of RAM). Firefox doesn't even make
> 32-bit builds on Linux anymore.
>
> I'm not really raging, but it's sad how bloated we've gotten. Optimization
> takes time, and some people just aren't willing to do it.

I enjoyed your "not rant". Regarding UNIX ed(1) reading the whole file into memory, I imagine one could use mmap(2) on newer hardware to let the OS manage which parts of the file are in memory or not.

Optimization is a response to constraints. Severe constraints force "rationing" of resources, similar to fiscal responsibility. There can be a zen-like beauty in choosing what to omit, versus Everything Everywhere All Of The Time!!@!

Under a minimalist philosophy that appreciates zen-like beauty, ed(1) might be seen as a tool intended for editing source code. Having a single source file larger than total memory might be taken as a sign that the code needs to be re-factored into a more modular design. :-P

"Did it break when you did that? Don't do that then."

ecm

Homepage E-mail

Düsseldorf, Germany,
25.12.2025, 17:56

@ bencollver
 

Advocate software bloat

> Optimization is a response to constraints. Severe constraints force
> "rationing" of resources, similar to fiscal responsibility. There can be a
> zen-like beauty in choosing what to omit, versus Everything Everywhere All
> Of The Time!!@!

I am a little partial to my "go everywhere, do everything" mantra; refer to my blog.

The latest outgrowth of this is the Master Control Program (lDOS MCP), named as a joke, which includes three kernels (plus a boot menu to select one of the three), plus a device driver, plus three DOS applications, plus more than 60 Extensions for lDebug, and optionally plus a 7zball of sources, all in a single file.

This single-file approach means it cannot be loaded by some FreeDOS loaders as a kernel (<= 128 KiB limit), and likewise not as a device driver by some DOS versions, because these check or want to load the entire file rather than just the necessary parts.

> Under a minimalist philosophy that appreciates zen-like beauty, ed(1) might
> be seen as a tool intended for editing source code. Having a single source
> file larger than total memory might be taken as a sign that the code needs
> to be re-factored into a more modular design. :-P

I did break apart the debugger into many source files over the years, for instance:

ldebug/source$ find . -iname '*.asm' -o -iname '*.mac' > 20251225.fil
ldebug/source$ cntlines -q -u @20251225.fil
Files:          183
Bytes:          2995368
Total lines:    141597
 Blanks:        17187
 Comment only:  24698
 Actual code:   99712
ldebug/source$ find . -maxdepth 1 -iname '*.asm' -o -iname '*.mac' > 20251225.loc
ldebug/source$ cntlines -q -u @20251225.loc
Files:          65
Bytes:          1821955
Total lines:    82806
 Blanks:        9122
 Comment only:  18592
 Actual code:   55092
ldebug/source$


But if we look at the biggest files, there are still several exceeding 64 KiB:

ldebug/source$ ls -lgG *.asm *.mac --sort=size | head -n20
-rw-r--r-- 1 194502 Dec 25 17:36 debug.asm
-rw-r--r-- 1 173458 Dec 25 17:36 run.asm
-rw-r--r-- 1 168800 Dec 25 17:36 init.asm
-rw-r--r-- 1 150954 Dec 25 17:36 expr.asm
-rw-r--r-- 1 133923 Dec 25 17:36 boot.asm
-rw-r--r-- 1 113518 Dec 25 17:36 lineio.asm
-rw-r--r-- 1  98915 Dec 25 17:36 uu.asm
-rw-r--r-- 1  79351 Dec 25 17:36 aa.asm
-rw-r--r-- 1  63186 Dec 25 17:36 msg.asm
-rw-r--r-- 1  58218 Dec 25 17:36 symbols.asm
-rw-r--r-- 1  54519 Dec 25 17:36 rr.asm
-rw-r--r-- 1  45499 Dec 25 17:36 bb.asm
-rw-r--r-- 1  43291 Dec 25 17:36 dd.asm
-rw-r--r-- 1  37646 Dec 25 17:36 debug.mac
-rw-r--r-- 1  27025 Dec 25 17:36 mm.asm
-rw-r--r-- 1  25347 Dec 25 17:36 immasm.asm
-rw-r--r-- 1  23962 Dec 25 17:36 ssshared.asm
-rw-r--r-- 1  23197 Dec 25 17:36 dishared.asm
-rw-r--r-- 1  18319 Dec 25 17:36 serialp.asm
-rw-r--r-- 1  16510 Dec 25 17:36 ints.asm
ldebug/source$

---
l

bretjohn

Homepage E-mail

Rio Rancho, NM,
27.12.2025, 04:43

@ bencollver
 

Fallacies that advocate software bloat

> Under a minimalist philosophy that appreciates zen-like beauty, ed(1) might
> be seen as a tool intended for editing source code. Having a single source
> file larger than total memory might be taken as a sign that the code needs
> to be re-factored into a more modular design.

Breaking up the source into smaller "modules" doesn't actually decrease the bloat -- it just hides it.

bencollver

Homepage

29.12.2025, 03:51

@ bretjohn
 

Fallacies that advocate software bloat

> Breaking up the source into smaller "modules" doesn't actually decrease the
> bloat -- it just hides it.

I thought joining all of the modules into one single source file would increase the amount of memory required to compile the program, versus compiling a bunch of little objects and linking them together.

Also, smaller modules would be easier to grasp in the biological memory of the software maintainer. I guess that would be "hiding" the complexity, kind of like OOP design.

ecm

Homepage E-mail

Düsseldorf, Germany,
29.12.2025, 13:01

@ bencollver
 

Fallacies that advocate software bloat

> > Breaking up the source into smaller "modules" doesn't actually decrease
> the
> > bloat -- it just hides it.
>
> I thought joining all of the modules into one single source file would
> increase the amount of memory required to compile the program, versus
> compiling a bunch of little objects and linking them together.

The debugger's main source (files not in source/eld/ or source/help/) is actually assembled as a single stream of source text, using text %include directives from debug.asm to load everything. The advantage is that you can do things with the multi-section format not supported by an assembler + OMF linker combination. The disadvantage is it can take as long as 30s and as much as 128 MiB to assemble the entire image.

> Also, smaller modules would be easier to grasp in the biological memory of
> the software maintainer. I guess that would be "hiding" the complexity,
> kind of like OOP design.

Indeed, this is why I split the source text into many files.

---
l

marcov

31.12.2025, 15:11

@ bencollver
 

Fallacies that advocate software bloat

> > Breaking up the source into smaller "modules" doesn't actually decrease
> > the bloat -- it just hides it.

Bloat is an unscientific term. But modularisation has very small overheads (like a per-module ctor/dtor, especially when table-based). Some of those can be mitigated by global optimisations.

> I thought joining all of the modules into one single source file would
> increase the amount of memory required to compile the program, versus
> compiling a bunch of little objects and linking them together.

That depends. The compiler might eat more memory in the single-source-file case, but more labels will already have been resolved in the .o it produces, so the linker might consume less memory (*)

If half of your code is in static inline functions in other modules' headers, you still have almost the entire program to process in your main module.

Optimisations across module borders are relatively expensive, but that is a trade-off for a better result. Note that in the single-file case, multi-pass compilers do similar things anyway.

> Also, smaller modules would be easier to grasp in the biological memory of
> the software maintainer. I guess that would be "hiding" the complexity,
> kind of like OOP design.

Yes. A bit of encapsulation and information hiding. Also promotes reuse a bit more.

(*) Since gcc/binutils AFAIK still don't have section-level smartlinking on Windows, creating a .o per symbol is the only way to get fine-grained smartlinking. I've seen LD.exe use 1.5 GB to link a 10-15 MB EXE file (or 2.5 times that with debug info) in such a case. If we hadn't implemented our own linker, we would probably be using a 64-bit hosted linker to generate 32-bit binaries by now (a 36 MB EXE, on the order of 500 MB with debug info).

samwdpckr

26.12.2025, 09:39

@ Rugxulo
 

Fallacies that advocate software bloat

> The other bad idea is always "hardware is cheap" (e.g. RAM), but that is
> clearly not true anymore.

Arguably it has never been true at all. We have recently had a very short period in history where one could buy a new computer for less than 300 euros or US dollars, but those computers were never powerful enough to run any modern software at decent speed. They typically had just enough memory to boot the pre-installed version of Windows and maybe run one program at a time, with no real multitasking capability. Back then one could make them usable by replacing the factory-installed operating system with some Linux distribution, but now even Linux has abandoned its former ideals and it doesn't work like that anymore.

On the other hand, phones became very expensive when the age of cheap Nokia phones ended. Back then you could buy a basic phone for 40 euros and it easily lasted six years or more. Nowadays the surrounding society actively makes your life very hard if you don't have one of the newest Android/iOS phone models, which are very expensive. Even the so-called cheap models easily cost at least 200 euros, and they are full of spyware. And they are considered durable if they last two years.

Not to mention that 200 or 300 euros or US dollars is also way too expensive for many people.

kerravon

E-mail

Ligao, Free World North,
31.12.2025, 19:29

@ samwdpckr
 

Fallacies that advocate software bloat

> On the other hand phones became very expensive when the age of cheap Nokia
> phones ended. Back then you could buy a basic phone with 40 euros and it
> lasted easily six years or more. Nowadays the surrounding society actively
> makes your life very hard if you don't have one of the newest Android/iOS
> phone models, which are very expensive. Even the so-called cheap models
> easily cost at least 200 euros and they are full of spyware. And they are
> considered durable if they last two years.

I don't know if Europe is different from Australia, but here:

https://www.officeworks.com.au/shop/officeworks/p/opel-mobile-s55-android-smartphone-black-opels55

is a brand new Android phone for AUD$108 = US$72

4 GB RAM
32 GB storage
unlocked

I agree that 32 GB storage is pretty tough. It depends on
exactly what you need to be able to do.

If you want to run PdAndro, it's plenty.

BFN. Paul.

DOStuff

28.12.2025, 11:20

@ samwdpckr
 

Fallacies that advocate software bloat

> Give me more fallacies to debunk!

You should probably add security to your list, but not without providing your own insight on it.

Something that is not a fallacy is that software bloat can be a mess to maintain. Bloat, and complexity of maintenance, provide corporate advantages, especially in the face of open-source code. I question whether this isn't the real reason for the interest in Rust. But that really depends on which features of the language you are using. If you are aiming for extreme security features (as is claimed, anyway), that code will be harder to maintain and harder to fork (hard directional design forks). However, one of the reasons for pushing Rust might be the idea that AI code could need less auditing: the idea of saving money because time-earned experience is obsolete.

Despite having no way to mitigate some of the notable CPU flaws, how susceptible is DOS to them? This isn't a fair comparison, due to the limited usability of DOS in the role of a server. But that is where you begin to unravel the real issue: these modern approaches are themselves the security nightmare.

There is an existing disgust for operating systems running in ring 0 while connected to the Internet. I much prefer the idea that ring-0 privileges come from direct hardware keyboard access. From there on out, what you do is on you. But instead, the "secure" idea is that ring 0 is attained through dynamic privilege access. There are practical reasons for this, but it doesn't change the inherent nightmare that comes with it. No matter how responsible you are with the applications you configure and run, your kernel is intentionally built with hard-coded backdoors (a roll of the dice, or maybe even less secure than that).

The idea used to be: protect the system from bad code that might compromise system integrity/stability. This was added at a hardware level, and has pros and cons. I wonder what an OS designed for unreal mode might be like. The amount of code included to protect the system from userspace is crazy. Everything you add is itself a potential vulnerability, and must be maintained to match future secure code modifications. Add to this that, now, security must be considered everywhere you look.

There are practical reasons for these modern designs, even though I disagree with them. I really consider computers a tool, not something that the infrastructure/security of the world should depend on. If we are reaching a point where computers are the only option, we are doing it wrong.

The elephant in the room is the aging idea of "personal computing". Nothing modern is intended to provide a person with a "personal computer". You are buying a social device. I don't mean "person to person" socializing, though that does play a large role. More and more, the software (and somewhat the hardware) is being designed with usability that highly merits network/Internet access. Some devices are near worthless without Internet access.

All of this creates a complex and hairy ecosystem that adds convenience at an extreme cost to simplicity. All of this complexity is fine if you just "go along to get along". But that is not "personal computing". You must move with the herd, and that can mean both software and hardware.

Software-wise, we all took the convenience bait, even if we still prefer simplicity. The consequence is that the development of true alternatives fell away/behind. Probably the biggest cause is that there wasn't a big enough market for it. But corporate interests have the upper hand, with greater control over available hardware. While that advantage has been abused, realistically, it could have been abused more heavily; and it is about to be.

What will remain of personal computing when the computer you use must be authenticated to your legal identity, and any alteration of the device or its software voids its ability to connect to the Internet, is illegal, or causes the device to break upon detection? Some will have old devices, and those that we manage to modify for non-networked use. Will we even be able to distribute software "online" for non-secured hardware and operating systems? If the software is for a modded device, it might be illegal. It may seem extreme, but older operating systems and hardware might be considered security threats. That might be a long way off, but its approach could be greatly accelerated by worldly catastrophic events of a digital-security kind. Hmm, not that it looks inevitable or anything... If not for something dramatic, we have had a pretty good track record of moving away from things really slowly. Maybe that will help.

The biggest fallacy, that supports software bloat, is that we should "go along, to get along".

samwdpckr

29.12.2025, 02:52

@ DOStuff
 

Fallacies that advocate software bloat

> There is an existing disgust, for operating systems running in ring-0, if
> connected to the Internet.

Doesn't the TCP/IP stack usually work in ring-0 on all modern operating systems?

DOStuff

31.12.2025, 09:01

@ samwdpckr
 

Fallacies that advocate software bloat

> Doesn't the TCP/IP stack usually work in ring-0 on all modern operating
> systems?

Yes, badly worded. Userland networking/Internet applications running in ring-0.

kerravon

E-mail

Ligao, Free World North,
31.12.2025, 19:53

@ DOStuff
 

Fallacies that advocate software bloat

> There are practical reasons for these modern designs, even though I
> disagree with them. I really consider computers a tool, not something that
> the infrastructure/security of the world should depend on. If we are
> reaching a point where computers are the only option, we are doing it
> wrong.

I like the sound of this.

Here is something that I consider to be the bottom line:

https://www.bbc.com/news/articles/cly64g5y744o

Someone was forced to use pen and paper.

While modern systems work great - when they work great -
I see a need for a "backstop".

Something between a modern system which is too complicated
to understand or maintain, and a pen and paper which is actually
understood.

I normally try to put PDOS/386 in there. But it could be MSDOS
instead.

My wife thinks that PDOS is a joke - why would anyone want to
use that when they can instead use xyz (some modern app that
she uses)?

And I tell her that I'm not trying to compete against xyz. I'm trying
to compete against NOTHING AT ALL.

I don't know the circumstances, but if you were to wake up one day
and it turns out that your current software all had ceased working
because the OS vendor had a "drop dead" date in the software,
what are you going to do?

Pen and paper is the only thing you can use, unless you have PDOS
or similar.

And the funny thing is, that ideally this situation would never occur.
The backstop should ideally never be needed. So no-one will ever
see that situation. And PDOS should thus ideally, be useless.

I do know of one situation, where it acts as a sort of backstop. Here:

https://www.liberatorswithoutborders.org/omelette.txt

It puts an end to the "means of production" argument. So wipes out
Marxist theory. Unless I'm missing something. Regardless of whether
or not it is actually used.

BFN. Paul.

RayeR

Homepage

CZ,
17.01.2026, 17:11

@ samwdpckr
 

Fallacies that advocate software bloat

> New computers are more efficient than old ones;

I think they are (the HW): if we measure the computing power of modern CPUs in MIPS per watt, they are definitely better than the old ones. And now you can choose from a huge range of computing power, from very-low-consumption Quark, Vortex, ULV Atom and i3 chips up to huge multicore monsters. But the problem is that bloatware demands more computing power for the same tasks than before, so the HW progress is neutralised, sometimes... I can imagine someone writing a letter in MS Word from Office 95, on Windows 95 running on some Pentium 100 with 32 MB RAM; 30 years later, someone does the same in Office 365 on some 64-bit corpo laptop with an i5 at 4 GHz and 16 GB RAM. Has the productivity of writing changed to reflect the computing scale? Maybe now with AI, it writes the letter for you based on a simple request :)

---
DOS gives me freedom to unlimited HW access.

Rugxulo

Homepage

Usono,
18.01.2026, 16:54

@ RayeR
 

Fallacies that advocate software bloat

> > New computers are more efficient than old ones;

Supposedly a 486 (an in-order CPU) was much more energy-efficient than the later out-of-order CPUs (686 etc.). Granted, I'm thinking more of battery life, and I don't have any concrete proof of that.

Obviously the early Intel Atoms were in-order too. The Pentium Pro (686) had 4-1-1 micro-op decoding (out of order), and even the original Pentium (superscalar) had U and (weaker) V pipes. I think it's normally called "speculative execution" when the CPU guesses results before you ask for them. (Dunno, I'm not really a professional optimizer.)

The Nintendo Switch running Tears of the Kingdom supposedly gets about three hours of battery life! That's not great. People made fun of older handhelds (e.g. the Sega Game Gear or Atari Lynx) that actually lasted longer than that. My 2022 Intel laptop barely gets a few hours of battery life, but ARM devices (e.g. my old Android tablet) were much, much more efficient. (And yet the Switch uses ARM64, go figure.) My Intel Celeron Chromebook used to get much better battery life too, but it was VERY short-lived and kind of a throwaway device. Most devices nowadays seem to be.

I guess there is no perfect answer or we all would've solved it by now.
