DOS ain't dead


FPC for DOS / FreeDOS (DOSX)

posted by marcov, 15.04.2008, 16:11

> > two #includes of the same header can have totally different
> > preprocessor state (and thus preprocessed code) as result.
>
> So it's impossible without breaking things? (I dunno, honestly.)

No, just difficult and a lot of work. Since the "precompiled" header potentially depends on the entire preprocessor state up to that point, the typical approach is to reduce that context to the values that really matter for the header and store those. Then you can "simply" compare them on the next inclusion and decide whether to reinterpret the header or use the precompiled version.

Moreover, the state is not entirely random (certain inclusion orders are pretty constant), so a history of how the context was built up can also help.

I only know this in theory; how the C compilers that do it (gcc doesn't) implement it, I don't know, also because most of them are proprietary.

> Kinda silly to buy a 10 x faster computer and then slow it down 10 x by
> more useless abstraction. :-/

If that were the factor, yes, that would be sad. Luckily it isn't.

> From what I've (barely) learned recently: 486 was pipelined (vs. 386
> partially pipelined)

Yes, but only in instruction fetching. IOW it still couldn't get faster than one instruction per clock. The 386 had an even simpler prefetch, I believe.

But note the main thing here: my leaving assembler as my main programming language dates from 486 times (and partially earlier), not from Core2 times, as the advocates here try to insinuate.

> while Pentium was superscalar (two pipelines),

The Pentium was superscalar, but in a very limited way: effectively 1 1/2 pipelines, since the V pipe could only execute a restricted set of simple instructions, and only when they paired with the U pipe.

> and
> later models even moreso. The Pentium even had a pipelined FPU. And yet,
> there is plenty you can do to help a 486, 586, 686, etc. At least
> the Quake authors thought so
> (which used DJGPP for the compiler, BTW).

Actually it is kind of funny that I'm having this discussion. Last Saturday I junked all my old PCs. The slowest PC now is an Athlon XP2000+.

And yes, I remember the Quake situation pretty well, since I was bitten badly by it. I bought not a Pentium but a Cyrix (a P166+ for 60% of the price of a Pentium 133). Since it didn't have the same FPU/integer trade-offs the Quake code was tuned for, Quake was painful.

Of course, when I used the price difference to buy a Voodoo card, it was maybe still the better deal. :)

I do still have several slower computers, but they are not intel/PCs.

> 486s do indeed suffer from AGIs, just not as bad as Pentiums. And since
> acceptable 486 code will "by default" run much much faster (usually twice
> as fast or more), such penalties are easily offset. But yes, it's
> inherently hard to optimize, period! But it's still worthwhile, IMO.

I meant on faster computers: a misprediction gets more expensive as clocks rise, because the deeper pipeline that has to be flushed costs more cycles.

> > For an assembler programmer you don't seem to know much about this
> > !?!?!?!
>
> I don't understand it fully (who can?), it's very complex. Plus, it's hard
> to prove it easily, so I tend to still look for easy answers. I know this,
> for sure: 386, 486, 586, 686 all require different measures of
> optimization. You can somewhat optimize for all of them, though.

Using the timestamp counter (RDTSC), one can benchmark sequences of instructions pretty well. But it is indeed hard. The trick is that not all code is important in that respect, and for the code that matters (like the typical RTL primitives) it is worth the effort. See e.g. the fastcode project.

> The "infinite" here is regarding the continual process of upgrading your
> processor every so often when a faster one is released (assuming your
> motherboard is compatible, ugh).

Even then: over 10 years, an E100 upgrade kit every 3-4 years comes to E200-300 total. Not trivial, but certainly not infinite.

> IMO, they could use tons more help regarding their 586 or less
> optimizations. However, I'm unlikely to be of use in that regard (at
> least, not yet). :-/

That's because the people who are still interested in it are wasting their time writing everything in assembler. :-) When they grow out of it, they pursue other interests. Somehow that is tragic.

> Early PPros actually ran 16-bit code slower than later-model Pentiums!

No. All PPros did. Only the P-II resolved this.

> (And yes, I have other computers too, including this P4, heh). They all
> act differently, but for sure they should mostly run things as
> fast, if not faster.

They all did, with the exception of the upgrade from the C=64 to my first PC, a 386. While the 386 had a 20-times-faster clock, the software was way slower; I clearly remember thinking PC games really sucked, despite the PC's greater power in every respect (CPU, HD, memory, VGA, SB).

The main difference was that all C=64s were the same, and all software hugely took advantage of that. The chaos on the PC side really cost performance.

 
