DOS ain't dead


Compatibility woes / deprecation (Miscellaneous)

posted by marcov(R), 22.02.2009, 22:11

> > Yes they did. Why do you think they lamed on with P4 so long?
> They must've had high expectations for it.

Sure, but by the time the 3 GHz parts came out, they knew they weren't going to reach the planned 5 GHz unless a miracle happened. Yet it was still years until Core 2 after that.

> > And why do
> > you think AMD originally came up with x86_64 and not Intel?
> I dunno, market pressure? Ingenuity? Boredom? ;-)

Intel already wanted to use 64-bit as the "advantage" of Itanium (as the driving force for the x86 (IA32) -> IA64 migration). The last thing they needed was anything that detracted from that.

> > Correct, because Intel had a IA64 happy future. They even rebranded
> > x86 as IA32 to provide marketing continuity.
> I still highly doubt it. Do any home users of "normal" cpus have any
> IA64s? If not, then Intel never pushed it to the home market. Now, maybe
> x86-64 was AMD's competition to the supercomputer / server market, who
> knows.

The trick was that the IA64 scheme had already failed in the server market before the workstation market even came into sight. And that market (say, the AutoCAD and SolidWorks users) is traditionally the bridge between server and desktop use.

> > couldn't ramp up speed enough. (partially also because they had had to
> > ramp up P4 much faster than planned due to sudden AMD competition).
> A PIII was indeed faster per clock than P4, so only fast P4s really got
> better, and SSE2 made a difference (although required a code redesign due
> to quirks, ahem, stupid 16-byte alignment). This was due to the super long
> pipelines, right?

Yes. The first P4s were slower than the last (1.3 or 1.4 GHz?) P-III. However, P4s could sometimes peak with specially crafted code that used SSE, usually in the Photoshop/AutoCAD range.

And in general the P4 depended more on "P4-optimized code" than the P-III did. The Athlon was even less picky about how the code was optimized.

> Only with Athlon XP did they get full SSE1, and
> only AMD64 got SSE2 (although I'm still unsure how many people use such,
> esp. since non-Intel compilers don't really target it).

There are three uses:
- Real vector code, mostly in big commercial image-processing programs (Photoshop, AutoCAD, etc.).
- Basic primitives of the runtime (like moving a block of memory) can benefit from the wide registers of SSE(2). This is a very small number of very heavily used routines, but, contrary to most optimizations, it can even be noticeable to the user. Think libgcc here, or similar routines in other compiler runtimes.
- (SSE2 only) SSE2 can be used as a replacement for the floating-point engine (or, if you really want the last bit, as an addition). However, I don't know exactly how this compares to the FPU with regard to IEEE compliance of exceptions and precision. It might only be usable in cases where lower precision is acceptable.
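As an illustration of the second point, here is a minimal sketch of my own (not libgcc's or any compiler runtime's actual routine) of how a move-block primitive might use the 128-bit SSE2 registers:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Hypothetical move-block primitive: copies 16 bytes per iteration
   through the 128-bit XMM registers. Assumes len is a multiple of 16;
   a real runtime routine would also handle the tail and align the
   pointers (aligned accesses sidestep the 16-byte alignment quirk
   mentioned above). */
void copy_sse2(void *dst, const void *src, size_t len)
{
    __m128i *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;
    for (size_t i = 0; i < len / 16; i++) {
        /* unaligned load/store so the sketch works for any pointers */
        _mm_storeu_si128(&d[i], _mm_loadu_si128(&s[i]));
    }
}
```

A real implementation would also fall back to byte copies for short lengths, which is exactly why only a handful of such routines are worth hand-tuning.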

> Haifa, Israel ... yes, surprising even them I think was when Intel went to
> use the Pentium-M as the new base (basically a PIII w/ SSE2). That's
> because Prescott used a LOT more energy than even Northwood (which I have,
> c. 2002) for only a small increase in speed. They found out that their
> original plans to ramp up to 10 Ghz weren't going to happen (since
> Prescott at 3.8 Ghz was ridiculously hot).

Correct. And the Pentium-M had been a tremendous success in the notebook market.

> (I don't think Dell ever sold any IA64s to
> anybody, did they?

AFAIK yes, but PowerEdge (and higher) only. Not desktops, and not to consumers.

> Core 1 never got utilized much, from what I could tell. I think it was
> just a stepping stone. And Core 2 was the first to support x86-64 (not
> counting previous Xeons).

Core 1 was effectively the last of the Pentium-Ms. It was used a lot in laptops and SFFs.

> > Nehalem is the third (if I understood correctly, Nehalem is designed by
> > the former P4 team, which is, among others, why they brought hyper
> > threading back)
> Nehalem is supposed to be 30% faster than previous incarnations, even.

The reports vary. It depends on whether your tests scale up to 4 cores, and whether they need memory bandwidth. Also, affordable Nehalems are fairly low-clocked (the lowest Nehalem, 2.6 GHz or so, being in the price range of the recently released 3.5 GHz Core 2 Duo).

> Something about Intel's new "tick tock" strategy, new microarchitecture
> every other year. Of course, they're also including SSE4.2 now (instead of
> only SSSE3 or SSE4.1). My AMD64x2 laptop "only" (heh) supports SSE3, and
> yet I can't help but feel that all the SIMD crap is (almost) madness.

It isn't, but it shouldn't be overrated either. Rule of thumb: it is useless unless you specifically code for it. The exception is the "libgcc" point above, but that only pays off once a compiler/runtime has been adapted for it.

> AMD really had a good foothold until Core 2 came about. Then all the
> whiners about Intel's monopoly happily jumped ship due to faster cpus.

(Or they were, like me, basically honest and went with performance per dollar. This Core 2 is my first Intel since my i386SX-20; the rest was AMD or Cyrix.)

> > underclocked. Part of it was the glorious future that never came when it
> > would be equally clocked.
> Here's what I don't understand: AMD64 is good at number crunching?

The architecture? Well, yes and no. SSE2 is now guaranteed (contrary to x86/i386), usable as long as you are not really picky about precision. As a result, a lot more compilers enable it by _default_.

However _Opteron_ was particularly good at it.

> so? Due to the 16 64-bit GPRs or improved FPU? The FPU proper (and MMX
> etc.) is often considered "deprecated", which surprises me. I understand
> that SSE is all the rage, but ... I dunno, people are weird. :-| And
> since all x86-64 supports SSE2, it should be used more and more. (But is
> it?)

I'm not certain about this either, especially whether SSE2 can be used instead of the FPU in all cases (exceptions, precision).

>I'm not sure I believe x86-64 inherently speeds up everything as much
> as people claim.

It doesn't. On average it is about equal, or very marginally slower.

> > Correct. e.g. the main Nehalem highlights are the on-board memory manager
> > and better scalability over more than 2 cores. People over-focus on
> > instructions
> In other words, I'm not sure MMX, much less SSE2 etc., has been really
> utilized correctly so far. I don't blame the compilers (it's tough), but I
> kinda sorta almost agree with some: why did Intel make it so kludgy?

Because it was always only useful for a certain kind of app. It would be counterproductive to dedicate a large part of the die to it.

Later, when its use was somewhat established, they went further with SSE2 and spent a separate execution unit on it, later more.

> > > Encoding multimedia, compression, compiling, etc.
> > Hardly. The CPU-intensive part of the first two is too coarse-grained,
> > and the I/O too simple to benefit from kernel work.
> Well, 7-Zip (and 4x4, etc.) are multithreaded, at least in part, and it
> does make a difference. But it's not horribly slow anyways, so it only
> matters in huge files anyways.

I don't spend my days zipping/unzipping, and huge compression jobs (like backups) run unattended anyway.

> I'm not a huge, huge fan of make. Mostly because there are so many
> variants, and also because it's so kludgy and hard to make do exactly what
> you want. And it's very hard to read, too.

Well, my remark was not so much about that, but more about the fact that make's parallelism is so simplistic. It simply starts a train of parallel compiler instances, and the problem is that on today's fast computers the startup of the compiler is the most expensive part, not the actual compiling.

> > INTO the compiler, but I doubt that will bring that much.

> It can't hurt to try. But most likely the only win will be separating file
> I/O with the cpu-intensive part (like 7-Zip does).

I don't know what 7-Zip does, but I have some doubts that, in the current build model, we can keep a multithreaded compiler fed with stuff to compile before it shuts down and moves on to the next directory.

> > core. Usually you don't have a guzzler like an antivirus, but you still
> > get some speedup
> Tell that to Intel: they said at one point to expect "thousands" of cores
> eventually.

Yes, but they want to go into GPUs and general-purpose GPU computing.

> > a core2 6600 (2.4GHz) outran a 3.4GHz P-D by slightly more
> > than 100% (which is even more on a per clock basis)
> 256-bit bandwidth (vs. 128?), I think.

Where do you get that? One still installs DDR (1, 2, 3) in pairs for optimal performance, and each channel is 64-bit. So AFAIK memory bandwidth is still 128-bit (unless you use several QPI/HT links).
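As a back-of-the-envelope check (DDR2-800 is just an assumed example speed grade), the 128-bit figure is simply two 64-bit channels side by side:

```c
/* Peak memory bandwidth in MB/s: channels * channel width in bytes
   * megatransfers per second. Two 64-bit (8-byte) channels form the
   effective 128-bit path discussed above. */
int peak_bandwidth_mb(int channels, int bytes_per_transfer, int mt_per_s)
{
    return channels * bytes_per_transfer * mt_per_s;
}
/* Example: dual-channel DDR2-800 -> peak_bandwidth_mb(2, 8, 800) = 12800 MB/s */
```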

> hassle, more grunt work to do for no benefit). Besides, DJGPP can do all
> that and still run on (gasp) non-Windows! (And you're not stuck to 3.4.4
> or 3.4.5 ... and I found the 4.x snapshots VERY buggy, at least for
> Vista.)

Well, for programming I only use a few tools like GDB and make. Compiler+binutils are provided by FPC on win32/64.

> > We discourage UPX. I don't see the point.
> But FPC 2.2.2 includes UPX 3.03 in MAKEDOS.ZIP.

True, but that doesn't mean you should routinely use it.

> > OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP really
> > stops, Dos is dead.
> Well, maybe not. I mean, Turbo C++ 1.01 is "dead" but still used by
> FreeDOS many many years after-the-fact.

Is one remaining user really enough to declare the target non-dead? Is the C=64 not dead because there are still people working with it? IMHO it is the same as with cars: at a certain point they become oldtimers, and while still cherished, people don't use them every day anymore.

