Compatibility woes / deprecation (Miscellaneous)

posted by marcov(R), 19.02.2009, 10:47

> I don't think Intel ever intended it to be
> mainstream for home users, esp. since they sold lame-o P4s from 2000 until
> 2006 (Core) although Xeons supposedly had AMD's x86-64 in 2004 or so.

Yes, they did. Why do you think they limped along with the P4 for so long? And why do you think it was AMD that originally came up with x86_64, and not Intel? Exactly: because Intel had an IA64-happy future in mind. They even rebranded x86 as IA32 to provide marketing continuity.

The trouble with Itanium was manifold, though. It required huge investments in compilers (and the commercial tool vendors were already nearly bankrupt), and the logic the VLIW design saved was dwarfed by the emergence of multi-megabyte caches, so that advantage evaporated. Moreover, Intel was plagued with problems and couldn't ramp up clock speed enough (partially also because they had had to ramp up the P4 much faster than planned due to sudden AMD competition).

I worked in a university computational center from 2000-2003. Intel is a large beast with competing departments, but HP was pushing Itanium pretty hard.

> And they've hyped the Core 2 to death.

Core (one) was their rescue after two failed CPU designs: the P4, which was originally meant to hit the 5 GHz barrier but whose heat dissipation got too bad, and Itanic, oops, pardon me, Itanium. It came out of the fairly recently founded Israeli labs, targeted at the notebook market as a direct successor of the Pentium M.

Core 2 was the first design after they decided they needed something new. Nehalem is the third (if I understood correctly, Nehalem was designed by the former P4 team, which is, among other things, why they brought hyper-threading back).

> BTW, CWS says it was good for "number crunching" and "made x86 look like a
> toy" but "had little driver support".

It did have a decent FPU (till the Athlon 64 came along), but it was relatively underclocked. Part of the promise was the glorious future, which never came, in which it would be clocked equally high.

Note that honest comparisons were pretty hard, because they gave the high-end Itaniums multi-MB caches, while the Xeon (P4-based) competition had to make do with a quarter of that.

> (And to be honest, it feels like SSE1/2/3/4/5 are just things for math
> nerds and/or multimedia freaks.

Correct. The main Nehalem highlights, for example, are the on-die memory controller and better scalability beyond 2 cores. People over-focus on instructions, just as they over-focus on the language when talking about development systems.

> people wonder why Intel didn't extend the normal
> GPR size to 64-bits long
> before AMD.)

> PAE: PPro or newer (64 GB max.)
> PSE-36: PIII (64 GB max., "simpler alternative"??)
> AMD64: 40-bit address space (1 TB max.)

Physical, but they had 48-bit virtual addresses, which were sign-extended to 64 bits.
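
For illustration, here is a minimal C sketch of that sign-extension rule, assuming the usual 48 implemented virtual bits (the function name is my own, nothing standard):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: a 64-bit virtual address is "canonical" on a 48-bit
       implementation when bits 63..47 are all copies of bit 47. */
    static bool is_canonical48(uint64_t va)
    {
        /* Shift bit 47 up to bit 63, then shift it back down, smearing it
           over the top 16 bits; assumes the compiler does arithmetic right
           shift on signed values (all common ones do). */
        int64_t sext = (int64_t)(va << 16) >> 16;
        return (uint64_t)sext == va;
    }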

> AMD64 / Barcelona: 48-bit (256 TB max.)

For the rest it is more or less correct, but note that of the 32-bit parts, often only the server variants (Xeon) actually had PAE enabled. I don't think you could buy a consumer PAE CPU till x86_64 (which has PAE as part of the standard).
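
The maxima in the quoted list are just powers of two of the address-bit counts; a quick C sanity check:

    #include <stdio.h>

    /* PAE/PSE-36: 36 physical address bits, early AMD64 parts: 40,
       Barcelona: 48 -- the quoted maxima follow directly. */
    int main(void)
    {
        printf("2^36 bytes = %llu GB\n", (1ULL << 36) >> 30);  /* 64 GB  */
        printf("2^40 bytes = %llu TB\n", (1ULL << 40) >> 40);  /* 1 TB   */
        printf("2^48 bytes = %llu TB\n", (1ULL << 48) >> 40);  /* 256 TB */
        return 0;
    }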

I don't know much about PSE; I never used it, and I can't remember it being mentioned when I was at the university computational center either.

> Encoding multimedia, compression, compiling, etc.

Hardly. The CPU-intensive part of the first two is too coarse-grained, and the I/O too simple, for them to benefit from kernel work.

Compiling is a different story. Yes, it is faster with -j 4, but keep in mind that this is partially just earning back the scalability cost that was introduced in the first place (by making the entire process controllable with make, and the endless header reparsing that that entails).

FPC wins something like 20-40% with -j 2. Florian is looking into bringing threading INTO the compiler, but I doubt that will gain that much.

> Not much else really
> comes to mind. Gaming?? (Well, okay, running antivirus/anti-spyware in
> background is much much nicer on a dual core than single core. But that's
> Windows-specific, i.e. I doubt your FreeBSD 64-bits has that issue, heh.)

Well, on *nix it is the same. Everything but the main app goes on the 2nd core. Usually you don't have a guzzler like an antivirus, but you still get some speedup (and at least as important: responsiveness) from the 2nd core. But that is 50% utilization at best, so adding more cores is useless.

Gaming could benefit, but while they make a big production out of it to sell quad cores to gamers, it is only just starting. Chances are that by the time games really utilize more than 2 cores significantly, the quad-core rig you buy now will already be outdated for the most demanding games of that day.

> Well, are we comparing SSE5 to non-SIMD or to SSE2 or what? They already
> extended the SSE bandwidth in newer chips

The overall improvement of an app with SSE5 vs. the same app without SSE5 instructions, on the same machine.

> So Core 2 should be lots faster at SSE2 than a P4.

I happen to know that, because I actually have an SSE2 routine in production at work :-) Er, what was it again... a Core 2 6600 (2.4 GHz) outran a 3.4 GHz Pentium D by slightly more than 100% (which is even more on a per-clock basis).
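
To give an idea of the kind of code involved, here is a minimal sketch (not the actual production routine) of an SSE2 loop adding doubles two per instruction, using the standard emmintrin.h intrinsics:

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>

    /* c[i] = a[i] + b[i], two doubles per add. Sketch only: assumes n is
       even and the pointers are 16-byte aligned; a production routine
       would handle the misaligned head and the odd tail as well. */
    void add_doubles_sse2(const double *a, const double *b,
                          double *c, size_t n)
    {
        for (size_t i = 0; i < n; i += 2) {
            __m128d va = _mm_load_pd(a + i);
            __m128d vb = _mm_load_pd(b + i);
            _mm_store_pd(c + i, _mm_add_pd(va, vb));
        }
    }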

> N.B. Most of this is just what I've read. I'm really not that
> well-informed or intelligent or useful. Just a friendly caveat so you
> don't think I'm being pretentious. :-|

Reading is the best way. I get most of the good info from the German c't magazine, occasionally throwing in a Dr. Dobb's. I also like Ars Technica and The Register (though the latter is as much entertainment as info).

I started following CPU matters because of FPC, but if you do it for ten years you learn a lot. I'm not that big on the practical applications either.

> And DOS just prefers
> DJGPP while Win32 users prefer MinGW with a very select few preferring
> Cygwin (more restrictive license, big .DLL).

I'm in between. I use Cygwin the most, but for the actual programming I/we use MinGW binaries.

> (Also, seems FPC prefers to compile its own stuff, e.g. WMemu is bigger
> than default old compile.

AFAIK, FPC just picks stuff somewhat arbitrarily, and is a bit conservative: it sticks with the stable stuff if it works. There is no real policy there. We would need more release builds (testing releases) to pull that off, and it's currently hard enough to keep to a schedule of one release every 6 months.

> If so, I assume UPX-UCL is more useful than
> stock semi-closed UPX-NRV.

We discourage UPX. I don't see the point.

> Uh, not exactly. I mean, as long as FreeDOS and OS/2 and (older) Windows
> still use it, it's useful.

True, and there is nothing wrong with using it. But you can't blame other people for lost investments in something you knew all along was a goner in the long term.

> I consider DOS more stable than
> Linux (2.2? 2.4? 2.6?) or *BSD (have they ever finished Linux 2.6 emulation
> or still stuck at 2.4?).

I've only used the Linux emulation for _an_ Adobe Acrobat Reader (I don't care that much which version).

And your DOS-is-stable argument reminds me a bit of the urban legend (?) about how MS got the initial Windows NT4 (SP0) release C2 security certified: by changing the testing circumstances so that the NT machine was built into a brick wall with only the power cord going in. Then it was 100% secure.

IOW, you can't judge stability by comparing unequal amounts of services.

> > Oldest rule of marketplaces:
> > Costs money/effort etc, and not enough demand.
>
> Then why even bother? Why draw up a standard and not use it?

Not everything always goes as planned. I'm sure they planned to use it.

> That's just weird. "Good" doesn't really depreciate over time.

Actually, I think it does change, yes.

> Is it really better to remake the world (or maybe let the pre-existing
> world work with you)?

IMHO the whole static world concept is artificial. There never was that stable world that needs to be remade in the first place. It is part retrospective idealization, part over-simplification.

> Vista is already two years old. Only three more to go before obsoletion!
> :-P

Vista will be obsolete sooner than XP Pro (which IIRC gets security support till 2014).

> We can have multitasking but not multi-boot or multi-development?

Sure we can. But apparently DOS can't sustain itself. That is the problem, not others forbidding it.

> They don't even try, even when it's easy, even when it IS beneficial.
> C'mon, you're basically saying that nothing useful ever has or ever could
> come out of DOS.

If you say:

... could not come out of DOS anymore ...

then yes. I was teasing you about setting up archiving communities etc., but honestly, I think it is already too late for that, since such communities take quite some time to develop.

OTOH, DJGPP seems to be coming alive again somewhat after a slump. If DJGPP really stops, DOS is dead.

> Besides, not everything is about net profit / gain,
> sometimes it's just about making something useful for someone else. (I
> mean, NetBSD running on a toaster isn't majorly practical or useful to me.
> But it's still cool.)

True, but the NetBSD people do that themselves, which is exactly the problem with DOS: no hardened community that bands together and actively works. Just a few forums full of people with delusions about DOS's old grandeur, fragmented between people using old commercial stuff and people working on an OSS platform.

> So you really only see DOS as 16-bits non-*nix single-tasking real-mode,
> and nothing else??

No. I wouldn't have worked on 32-bit FPC for DOS (till 2000) if I thought that.

> Or more realistically as a 16-/32-bit hacked real/pmode
> hybrid

At the moment I don't really have a use case. I'm here mostly because I like full-screen text-mode apps, and DOS is the platform where those were dominant.

 
