DOS ain't dead

Compatibility woes / deprecation (Miscellaneous)

posted by Rugxulo, Usono, 20.02.2009, 00:24

> > I don't think Intel ever intended it to be mainstream for home users,
> > esp. since they sold lame-o P4s from 2000 until 2006 (Core), although
> > Xeons supposedly had AMD's x86-64 in 2004 or so.
>
> Yes they did. Why do you think they lamed on with P4 so long?

They must've had high expectations for it.

> And why do
> you think AMD originally came up with x86_64 and not Intel?

I dunno, market pressure? Ingenuity? Boredom? ;-)

> Correct, because Intel had an IA64-happy future. They even rebranded
> x86 as IA32 to provide marketing continuity.

I still highly doubt it. Do any home users of "normal" CPUs have any IA64s? If not, then Intel never pushed it to the home market. Now, maybe x86-64 was AMD's answer to the supercomputer / server market, who knows.

> The trouble with Itanium was manifold, though. It required huge investments
> in compilers (and commercial tool vendors were already near bankrupt), and
> the logic the VLIW design saved was dwarfed by the emergence of
> multi-megabyte caches and evaporated. Moreover, Intel was plagued with
> problems and couldn't ramp up speed enough (partially also because they had
> had to ramp up the P4 much faster than planned due to sudden AMD
> competition).

A PIII was indeed faster per clock than the P4, so only the fastest P4s were really an improvement, and SSE2 made a difference (although it required a code redesign due to quirks, ahem, the stupid 16-byte alignment requirement). That was due to the super-long pipelines, right? (And no barrel shifter.) And yes, I think the Athlon was better bang for the buck at the time, possibly even the first to reach 1 GHz, I forget. Only with the Athlon XP did AMD get full SSE1, and only AMD64 got SSE2 (although I'm still unsure how many people use it, esp. since non-Intel compilers don't really target it).

> Intel is a large beast with departments competing, but HP was
> pushing itanium pretty badly.

But not for home use. The P4 was marketed to us home users, dunno why.

> > And they've hyped the Core 2 to death.
>
> Core 1 was their rescue (out of the fairly recently founded Israel labs,
> targeted at the notebook market as a direct successor of the P-M) after two
> failed CPU designs (P4, which was originally meant to hit the 5 GHz
> barrier, but its heat dissipation got too bad), and Itanic, oops, pardon
> me, Itanium.

Haifa, Israel ... yes, I think it surprised even them when Intel chose the Pentium-M as the new base (basically a PIII w/ SSE2). That's because Prescott used a LOT more energy than even Northwood (which I have, c. 2002) for only a small increase in speed. They found out that their original plans to ramp up to 10 GHz weren't going to happen (since Prescott at 3.8 GHz was ridiculously hot). So they went back to the more power-friendly Pentium-M. (I don't think Dell ever sold any IA64s to anybody, did they? That's where my P4 came from, and surely by 2002 IA64 was out there, so they could have ... if they wanted.)

> Core 2 was the first design after they decided they needed something new.

Core 1 never got utilized much, from what I could tell. I think it was just a stepping stone. And Core 2 was the first to support x86-64 (not counting previous Xeons).

> Nehalem is the third (if I understood correctly, Nehalem is designed by
> the former P4 team, which is, among others, why they brought hyper
> threading back)

Nehalem is supposed to be 30% faster than previous incarnations, even. Something about Intel's new "tick-tock" strategy: a new microarchitecture every other year. Of course, they're also including SSE4.2 now (instead of only SSSE3 or SSE4.1). My AMD64 X2 laptop "only" (heh) supports SSE3, and yet I can't help but feel that all the SIMD crap is (almost) madness.

AMD really had a good foothold until Core 2 came about. Then all the whiners about Intel's monopoly happily jumped ship for the faster CPUs. Gotta have the latest and greatest, I guess. Oh well. Not that I really blame them; it's just that I find it hard to really consider AMD a "generation behind".

> > BTW, CWS says it was good for "number crunching" and "made x86 look
> > like a toy" but "had little driver support".
>
> It did have a decent FPU (till the Athlon64 came along), but it was
> relatively underclocked. Part of it was the glorious future that never
> came, when it would be equally clocked.

Here's what I don't understand: AMD64 is good at number crunching? How so? Due to the sixteen 64-bit GPRs or an improved FPU? The FPU proper (and MMX, etc.) is often considered "deprecated", which surprises me. I understand that SSE is all the rage, but ... I dunno, people are weird. :-| And since all x86-64 chips support SSE2, it should be used more and more. (But is it?) I'm not sure I believe x86-64 inherently speeds everything up as much as people claim.

> Note that honest comparisons were pretty hard, due to the fact that they
> gave the high-end Itanium multi-MB caches, while the Xeon (P4-based)
> competition had to make do with a fourth of that.

Same with AMD: they've always had smaller caches and (slightly) older fabs. Hence Intel is moving to 32nm while AMD is "stuck" at 45nm with the Phenom II. (I know, boo hoo.)

> > (And to be honest, it feels like SSE1/2/3/4/5 are just things for math
> > nerds and/or multimedia freaks.)
>
> Correct. E.g. the main Nehalem highlights are the on-board memory
> controller and better scalability over more than 2 cores. People
> over-focus on instructions.

In other words, I'm not sure MMX, much less SSE2 etc., has been really utilized correctly so far. I don't blame the compilers (it's tough), but I kinda sorta almost agree with some: why did Intel make it so kludgy?

> > Encoding multimedia, compression, compiling, etc.
>
> Hardly. The CPU-intensive part of the first two is too coarse-grained, and
> the I/O too simple to benefit from kernel work.

Well, 7-Zip (and 4x4, etc.) are multithreaded, at least in part, and it does make a difference. But it's not horribly slow anyway, so it really only matters for huge files.

> Compiling is a different story. Yes, it is faster with -j 4, but keep in
> mind that it is partially also just earning back what was lost in the
> first place (by having the entire process controllable with make, and the
> infinite header reparsing that that entails).

I'm not a huge fan of make. Mostly because there are so many variants, and also because it's so kludgy and hard to get to do exactly what you want. And it's very hard to read, too.
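For what it's worth, the kludginess mostly shows up in the details (tabs vs. spaces, cryptic automatic variables, per-variant extensions). A minimal GNU-make-style sketch, with hypothetical file names, of the pattern that `-j` then parallelizes, one independent .o rule per source file:

```makefile
CC     = gcc
CFLAGS = -O2 -Wall
OBJS   = main.o util.o        # hypothetical sources

# each recipe line below MUST start with a TAB, not spaces (classic make gotcha)
prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# pattern rule: build any .o from its .c; $< = first prerequisite, $@ = target.
# every .c re-reads util.h, which is the "infinite header reparsing" above.
%.o: %.c util.h
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f prog $(OBJS)
```

`make -j4` can then run the independent `%.o` rules concurrently, which is where the quoted 20-40% win comes from.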

> FPC wins like 20-40% with -j 2. Florian is looking into bringing threading
> INTO the compiler, but I doubt that will bring that much.

It can't hurt to try. But most likely the only win will be separating file I/O from the CPU-intensive part (like 7-Zip does).

> Well on *nix it is the same. Everything but the main app goes on the 2nd
> core. Usually you don't have a guzzler like an antivirus, but you still
> get some speedup

Tell that to Intel: they said at one point to expect "thousands" of cores eventually.

> Gaming could benefit, but while they make a big production out of it to
> sell quad cores to the gamers, it is only just starting.

Well, I never understood computer gaming, esp. needing such a high-powered rig just for the "pleasure" of configuring / installing / uninstalling / reinstalling / patching. I'd prefer a console (not that I game much anymore). Anyway, DirectX version, total RAM, and GPU probably affect a game more than core count these days.

> > So Core 2 should be lots faster at SSE2 than a P4.
>
> a core2 6600 (2.4GHz) outran a 3.4GHz P-D by slightly more
> than 100% (which is even more on a per clock basis)

The 128-bit SSE units, I think: Core 2 executes a 128-bit SSE op in one cycle, where the P4 split it into two 64-bit halves.

> > And DOS just prefers
> > DJGPP while Win32 users prefer MinGW with a very select few preferring
> > Cygwin (more restrictive license, big .DLL).
>
> I'm inbetween. I use cygwin the most, but for the programming I/we
> actually use mingw bins.

Cygwin seems to have a better app library. But it's a double-edged sword: no MSVCRT.DLL bugs, but you have to lug around a huge CYGWIN1.DLL as well as adhere to the license (which is only annoying in that it's an extra hassle, more grunt work for no benefit). Besides, DJGPP can do all that and still run on (gasp) non-Windows! (And you're not stuck with 3.4.4 or 3.4.5 ... and I found the 4.x snapshots VERY buggy, at least on Vista.)

> > (Also, seems FPC prefers to compile its own stuff, e.g. WMemu is bigger
> > If so, I assume UPX-UCL is more useful than
> > stock semi-closed UPX-NRV.
>
> We discourage UPX. I don't see the point.

But FPC 2.2.2 includes UPX 3.03 in MAKEDOS.ZIP.

N.B. My .BAT is vaguely flawed anyway, as I just found out yesterday. I don't know why (makefile quirks?), but optimizations for the main compressor proper weren't being turned on. So I have to modify it (done) and upload it (not done). I think I'll make a UPX-UCL package for FreeDOS (esp. since they never did anything with my previous unofficial build). But yeah, feel free to ignore.

> And your DOS stable argument reminds me a bit about the Urban Legend (?)
> how MS got Windows NT4 (sp0) initial release C2 security certified

I meant stable API as in both old and new programs still run fine. It's not as much of a moving target.

> OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP really
> stops, Dos is dead.

Well, maybe not. I mean, Turbo C++ 1.01 is "dead" but still used by FreeDOS many, many years after the fact. So somebody could still use it; it just wouldn't be updated. And actually you could always fork DJGPP (but call it something else) or use EMX/RSX or MOSS instead. (And there's always OpenWatcom.)

