DOS ain't dead

mr(R)

26.07.2008, 13:39
 

HX question about link.exe (DOSX)

My Visual Studio version is 2005; the included link.exe is version 8. To compile the DJGPP HX development example I need to get link.exe to work under HX, and that's where I am struggling.

Under FreeDOS I copied just link.exe to C:\programs\link\link.exe, started hdpmi32 and ran pestub link.exe. My plan was to start link.exe again and again, read the error message about which file is still missing, and copy that file into C:\programs\link\ as well, but this manual way is very time-consuming and cumbersome, isn't it?

After copying msvcr80.dll and msvcrt.dll to C:\programs\link\ I now get this error:
dpmild32: import not found: MoveFileExW
dpmild32: file KERNEL32.dll
dpmild32: C:\FDOS\bin\dadvapi.dll: cannot resolve imports

Currently I have no idea how to get link.exe to work under DOS. The page at http://www.unet.univie.ac.at/~a0503736/php/drdoswiki/index.php?n=Main.HX-DOS-lists also only lists link.exe versions 5.12, 7.00 and 7.10 as compatible; maybe it's not up to date, or maybe link version 8 is no longer compatible.

Please tell me how to resolve these DLL dependencies a bit more elegantly, or which version of link.exe is known to work.

Japheth(R)

Homepage

Germany (South),
26.07.2008, 16:21

@ mr
 

HX question about link.exe

> My Visual Studio version is 2005. The included link.exe is version 8. For
> compiling the DJGPP HX development example I need to get link.exe to work
> under HX and that's where I am struggling.
>
> Under FreeDOS did copy the single link.exe to C:\programs\link\link.exe,
> started hdpmi32 and pestub link.exe. Step by step I wanted to start
> link.exe over and over again to read the error messages which file is
> still missing and copying the needed file also into C:\programs\link\, but
> I think this manual way is very time consuming and circumstantially, is
> it?
>
> After copying msvcr80.dll and msvcrt.dll to C:\programs\link\ I am getting
> an error now.
> dpmild32: import not found: MoveFileExW
> dpmild32: file KERNEL32.dll
> dpmild32: C:\FDOS\bin\dadvapi.dll: cannot resolve imports
>
> Currently I have no idea how to get link.exe to work under DOS. On
> http://www.unet.univie.ac.at/~a0503736/php/drdoswiki/index.php?n=Main.HX-DOS-lists
> is also only listed link.exe version 5.12, 7.00 and 7.10 as compatible,
> maybe it's not updated or maybe link version 8 is no longer compatible.
>
> Please tell me how to resolve these DLL dependencies a bit more elegantly,
> or which version of link.exe is known to work.

All MS link binaries <= MSVC Toolkit 2003 (v7.10) should work with HX.

One way to get a valid MS link binary (v5.12) is to download the Masm32 package. A compatible, free MS link clone is PoLink, included in PellesC. Both are verified to work in DOS with HXRT.

---
MS-DOS forever!

mr(R)

26.07.2008, 19:25

@ Japheth
 

HX question about link.exe

> A compatible, free MS link clone is PoLink, included in PellesC.
> Both are verified to work in DOS with HXRT.

The bad news:
Unfortunately there is an exception.
http://img182.imageshack.us/img182/5528/34503279rc7.png

> One way to get a valid MS link binary (v5.12) is to download the Masm32
> package.

pestub link.exe gives an error. http://img301.imageshack.us/img301/6831/59740341ds4.png

And the good news:
After copying msvcrt.dll and msvcr80.dll, link.exe seems to work when started as dpmild32 link.exe.

Japheth(R)

Homepage

Germany (South),
26.07.2008, 19:53

@ mr
 

HX question about link.exe

> > A compatible, free MS link clone is PoLink, included in PellesC.
> > Both are verified to work in DOS with HXRT.
>
> The bad news:
> Unfortunately there is an exception.
> http://img182.imageshack.us/img182/5528/34503279rc7.png

I think this is because the DPMILD32 binary you're using is a bit old.

> > One way to get a valid MS link binary (v5.12) is to download the Masm32
> > package.
>
> pestub link.exe brings an error.
> http://img301.imageshack.us/img301/6831/59740341ds4.png

Don't take that too seriously; it's probably a bug in PESTUB. DPMILD32 has problems with RVAs < 0x1000, but an RVA of exactly 0x1000 is OK.

> And the good news:
> After copying msvcrt.dll and msvcr80.dll, link.exe seems to work when
> started as dpmild32 link.exe.

Good.

---
MS-DOS forever!

mr(R)

26.07.2008, 20:19

@ Japheth
 

HX question about link.exe

> > > A compatible, free MS link clone is PoLink, included in PellesC.
> > > Both are verified to work in DOS with HXRT.
> >
> > The bad news:
> > Unfortunately there is a exception.
> > http://img182.imageshack.us/img182/5528/34503279rc7.png
>
> I think this is because the DPMILD32 binary you're using is a bit old.

I wondered if I have a time machine or something... Just to make sure I hadn't completely messed up, yesterday I downloaded the latest stable version from your server again.

The old version was used because C:\FDOS\BIN comes before C:\drivers\hx\bin in the PATH variable...

Now HX is in the PATH before FDOS\BIN, and dpmild32 polink.exe is working.

Gnah, this was a stupid mistake and hard for me to track down.
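For anyone hitting the same PATH-ordering problem: the fix boils down to a single line in AUTOEXEC.BAT. This is only a sketch using the directories mentioned in this thread (adjust to your own install); the point is that the current HX binaries must be found before FreeDOS's bundled, older copies:

```shell
REM Put the up-to-date HX tools first, so DPMILD32 and friends
REM resolve from C:\drivers\hx\bin instead of C:\FDOS\BIN.
SET PATH=C:\drivers\hx\bin;C:\FDOS\bin;%PATH%
```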

mr(R)

26.07.2008, 20:35

@ Japheth
 

HX question about link.exe

> Both are verified to work in DOS with HXRT.

True, also working for me.

> A compatible, free MS link clone is PoLink, included in PellesC.

It seems not to be a drop-in replacement, though.

Finally I could compile the HX DJGPP example with gcc.bat using the link.exe from the Masm32 package. With polink I got a bunch of "already defined symbol" errors. Never mind, as long as link.exe from MS is working, but I just wanted to report it in case it's interesting.

mr(R)

26.07.2008, 20:47

@ mr
 

HX question about link.exe

Btw, I am kinda fascinated by HX. :) You can compile under DOS with DJGPP... run your program under DOS, and later you can use the same exe on XP and even on Vista (where DOS support has been removed)!

cm(R)

Homepage E-mail

Düsseldorf, Germany,
26.07.2008, 21:24

@ mr
 

HX question about link.exe

> Btw, I am kinda fascinated of HX. :) You can compile under DOS with
> DJGPP... Run your program under DOS and later you can use the same exe on
> XP and even on Vista (where DOS support has been removed)!

That's not the whole truth: 32-bit Vista still has the NTVDM. However, it has more restrictions, and of course all DOS support is removed from 64-bit Vista and XP.

HX is indeed very useful software. I've used the official Win32 version of 7-Zip under DOS for some months now.

---
l

mr(R)

26.07.2008, 22:23

@ cm
 

HX question about link.exe

> > Btw, I am kinda fascinated of HX. :) You can compile under DOS with
> > DJGPP... Run your program under DOS and later you can use the same exe
> on
> > XP and even on Vista (where DOS support has been removed)!
>
> That's not the complete truth, 32-bit Vista still has the NTVDM. However
> it has more restrictions, and of course all sort of DOS support is removed
> from 64-bit Vista and XP.

Under 64-bit XP/Vista, do the HX binaries still run?

cm(R)

Homepage E-mail

Düsseldorf, Germany,
26.07.2008, 23:18

@ mr
 

HX question about link.exe

> > That's not the complete truth, 32-bit Vista still has the NTVDM.
> However
> > it has more restrictions, and of course all sort of DOS support is
> removed
> > from 64-bit Vista and XP.
>
> Under 64-bit XP/Vista, do the HX binaries still run?

It depends on how the program is compiled. Programs that are in PE (Win32) format (and don't use DPMI features) should run fine; programs written for DPMI won't. You have to compile the program into a normal PE (Win32) binary and then make sure it's compatible with HX if you want to provide one file for both DOS and 64-bit Windows.

---
l

Japheth(R)

Homepage

Germany (South),
28.07.2008, 08:16

@ mr
 

HX question about link.exe

> Btw, I am kinda fascinated of HX. :) You can compile under DOS with
> DJGPP... Run your program under DOS and later you can use the same exe on
> XP and even on Vista (where DOS support has been removed)!

Thanks, but I'm not sure I understand you correctly, because a normal DJGPP DOS binary is also supposed to run both in DOS and in the Windows NTVDM. The purpose of the DJGPP sample supplied with HX is just to change the memory model from "flat" to "zero-based flat". The advantage is that a function in a PE DLL can then be called without tricks.

---
MS-DOS forever!

marcov(R)

28.07.2008, 22:48

@ Japheth
 

HX question about link.exe

> > Btw, I am kinda fascinated of HX. :) You can compile under DOS with
> > DJGPP... Run your program under DOS and later you can use the same exe
> on
> > XP and even on Vista (where DOS support has been removed)!
>
> Thanks, but I'm not sure if I understand you correctly. Because a normal
> DJGPP DOS binary is also supposed to run both in DOS and in Windows NTVDM.

DJGPP binaries start in 16-bit mode and then go to protected mode via DPMI to keep contact with the OS?

However, on a 64-bit OS, 16-bit code can't be executed, not even short loader code.

RayeR(R)

Homepage

CZ,
28.07.2008, 22:50

@ marcov
 

HX question about link.exe

> However in 64-bit, 16-bit code can't be executed, even not short loader
> code.

I think so. AFAIK the NTVDM was completely removed in 64-bit Windows, wasn't it?

---
DOS gives me freedom to unlimited HW access.

Rugxulo(R)

Homepage

Usono,
29.07.2008, 00:11

@ RayeR
 

HX question about link.exe

> > However in 64-bit, 16-bit code can't be executed, even not short loader
> > code.
>
> I think so, AFAIK NTVDM was completly removed in 64bit wins, isn't?

There's no V86 mode in 64-bit long mode. Long mode encompasses a compatibility submode and a 64-bit submode (I think). So it's not that NTVDM was removed; it just doesn't work on 64-bit due to the processor. (However, you'd think MS would emulate it a la DOSEMU or DOSBox, but no. If it doesn't make them money, I guess they don't care.)

64-bit is still kinda new territory (although a lot more popular than previously), so we don't really need it (unless you want > 4 GB of RAM). 64-bit stuff takes more RAM (and the virtual address space is technically only 48 bits for now, to be increased eventually).

marco, you run a 64-bit OS, right? What are your experiences? Faster? Slower? Good 32-bit compatibility? Uses more RAM?

marcov(R)

29.07.2008, 09:33
(edited by marcov, 29.07.2008, 17:10)

@ Rugxulo
 

HX question about link.exe

> marco, you run a 64-bit OS, right? What are your experiences?
> Faster? Slower? Good 32-bit compatibility? Uses more RAM?

(I run Linux and Windows Vista in 64-bit now. I also ran 64-bit FreeBSD, but managed to make that unbootable when I let Vista repair a disk, and haven't had time to look into it. The machine is a Core2 6600 with 4 GB; the main reason for the memory is Xen VMs under Linux.)

Total RAM use under Windows is higher, but not double. This is logical, since most of it is buffers anyway. Moreover, both systems can also run 32-bit binaries, and e.g. 64-bit boot CDs often have only a 64-bit kernel but a complete 32-bit userland. So does e.g. Mac OS X (which IIRC installs a 64-bit kernel on x86 during setup if it sees that you have enough RAM).

If you look at programs in the process manager, they are nearly twice the size, say 170-180% of the 32-bit equivalent. Similarly on disk.

In general, the performance difference is not really noticeable on a Core2, for good or worse. Before Vista SP1 I had the feeling that 64-bit Vista was slower, but the service pack seems to have erased that pretty much. (Since I was up to date with drivers, it felt like it was really the SP that did it.)

Since the memory difference (3.25 GB visible under XP, 4 GB on 64-bit Vista) isn't that big, you can't really notice it unless you really get to the edges.

I read up a lot on 64-bit programming, and the summary is that, as a rule of thumb, general-purpose code is slightly slower. But this is only a measurable difference, not a noticeable one.

Roughly, these are the factors that matter. Some are in favour of 64-bit, some against. Note that I write this up from memory, so there might be small mistakes:

1 Pointers are bigger (integers remain 32-bit) -> average data size is bigger
1a -> worse cache utilization,
1b -> more data to shovel around if you memmove() something. I think this is more significant than the cache effect for general-purpose programs (a few programs that random-access enormous trees excluded). If you don't do random access but work on e.g. streams, memory bandwidth is eventually always the limit, despite cache size.

2 x86_64 has more registers. Some functions that previously had to spill registers to the stack now don't have to. However, register usage is slightly different due to heavier use of register-parameter calling conventions, so this is not always straightforward to predict. AFAIK all OSes use the same calling convention now, btw. A lot less cdecl vs stdcall nonsense.

3 x86_64 has 64-bit registers. This mainly speeds up recoded (!) special-purpose code such as codecs, compression and encryption that didn't use SIMD or float (since those have been 64-bit on x86 for some time). The occasional 64-bit integer used in the code is of course also sped up, but this is even rarer than recoding the "special" code.

4 x86_64 can move (R)IP to a register. This speeds up PIC somewhat, though IIRC the Athlon and Core architectures already recognized and optimized the equivalent x86 construct.

5 The minimum instruction set is raised. This is theoretical (since you can generate code for e.g. SSE2 on x86 too), but in practice it is often the biggest difference. SSE2 is guaranteed, and so are P6 conditional moves. The variation in CPU capabilities on x86_64 is also much lower than on x86, at least for now.

6 On Windows, the x87 is not documented to work on 64-bit XP due to an MS documentation glitch. Registers are not saved properly during a context switch. This lowers the theoretical maximum performance since you can't use one unit.

Rugxulo(R)

Homepage

Usono,
29.07.2008, 20:42

@ marcov
 

HX question about link.exe

> (I run Linux and Windows Vista under 64-bit now. Also have a run 64-bit
> FreeBSD, but managed make that unbootable when I let Vista repair a disk.

Why are MS OSes always really, really bad at cooperating with others?? It's beyond embarrassing; you would hope they'd have fixed that by now!

> and haven't had time to look into it. The machine is a Core2 6600 with
> 4GB, the main reason for the memory is because of Xen VMs under Linux.)

Isn't Xen not in the kernel tree, unlike KVM, which is partially based on QEMU? (Or something.)

> Total RAM use under Windows is higher, but not twice. This is logical,
> since most are buffers anyway.

You mean aggressive caching?

> Moreover, both systems can run also 32-bit
> binaries, and e.g. 64-bit bootcds often only have a 64-bit kernel, but a
> complete 32-bit userland.

Really? I would assume they'd recompile everything (although that may not be a good idea).

> If you see programs in the process manager, they are nearly twice the
> size. Say 170-180% of the 32-bit eq. Similarly on disc.
>
> In general, performance difference is not really notable on a Core2.
>
> Since the memory difference (3.25GB visible under XP, 4GB on 64-bit Vista)
> isn't that big, you can't really notice the difference, unless you really
> get to the edges.

Sounds almost not even worth it. :-/

What I heard was that 64-bit was 10% faster, but a lot less drivers work.

> I read up a lot on 64-bits programming and the summary is that rule of
> thumb, general purpose code is slightly slower. But this is only a
> measurable difference, not a noticable one.

Maybe that's an implementation issue that will be fixed in later models??

> 2 x86_64 has more registers. Some functions that previously had to spill
> registers to the stack now don't have to. However the register usage is
> slightly different due to more use of register parameters calling
> conventions, so this is not always a straightforward to predict. Afaik,
> all OSes use the same calling convention now btw. A lot less cdecl vs
> stdcall nonsense.

GCC supports x86-64 (and before that various other 64-bit chips), so I assume it's not too bad at handling this.

> 5 The minimum instruction set is raised. This is theoretic (since you can
> generate code for e.g. SSE2 too on x86), but in practice it is often the
> biggest difference. SSE2 is guaranteed, and so are P6 conditional moves.
> The variation in CPU capabilities on x86_64 is also much lower than on
> x86, at least for now.

There doesn't seem to be much added to IA-32 chips since CMOVxx besides SIMD stuff. So, basically, it seems you only get very, very minor performance differences without targeting one of those instruction sets specifically (at least with GCC). The most popular seem to be MMX and SSE2 (not 3DNow! or SSE1; less need for doubles than scalars??).

> 6 On Windows, the x87 is not documented to work on 64-bit XP due to a MS
> documentation glitch. Registers are not saved properly during a context
> switch. This lowers theoretical maximal performance since you can't use an
> unit.

Some people say that the FPU and MMX are both deprecated in favor of SSE2. Maybe MS feels the same way, I dunno. And don't forget that SSE4.1 and SSE4a are already on the market (with AVX and SSE5 in the planning stages). However, a lot of legacy (32-bit, too) code uses the FPU, but for whatever reason, computers are moving too fast these days to (commercially) worry too much about legacy (although I think it's more of an issue with MS than not).

Microsoft basically started over from scratch with the Xbox 360 because it uses a PowerPC-ish CPU (tri-core, 3.2 GHz) unlike the previous Intel PIII Celeron 733 MHz. I felt that was a bad idea (even if they did somewhat emulate some older games, though not all), especially since they very quickly dropped all Xbox 1 efforts to focus on the new machine. It seems x86-64 (and/or future Windows) is almost heading this way: drop everything "old" in favor of the new. It's like starting from scratch, so you basically have no library of programs and no userbase, so what's the point?? As much as "legacy" gets a bad name, what's so wrong with it? If it ain't broke, don't break it! I mean, who would use Windows if none of their favorite programs ran? Sure, people write for it because it has a famous name, which means market share and more potential money. But people (well, me at least) don't want to re-buy everything every few years, even if some marketing gurus think that's a good idea. Emulation is fine if it works (which sometimes it doesn't), plus it's much, much slower.

P.S. QEMU loading the latest Xubuntu x86-64 was taking forever (15+ minutes) and still didn't get past the moving-bar screen, so I had to kill it. So at least at the moment that kind of emulation is not ideal.

marcov(R)

29.07.2008, 22:15

@ Rugxulo
 

HX question about link.exe

> Why are MS OSes always really really bad cooperating with others?? It's
> beyond embarrassing since you would hope they would fix that by now!

While I agree in general (and even in this case), the root cause is a bit different. I suspect one of the OSes has a different view of the hard disk layout than the others, resulting in difficulties making Windows bootable again after a different OS install, and vice versa.

> Isn't Xen not in the kernel tree unlike LVM, which is partially based on
> QEMU? (Or something.)

There is a kernel dimension to it. The idea was to speed up development for other platforms. Emulators are quite slow, so I decided to go with hypervisors. It worked fine for a while, but it broke with Fedora 7 and has never been fixed since.

> > Total RAM use under Windows is higher, but not twice. This is logical,
> > since most are buffers anyway.
>
> You mean aggressive caching?

Any caching. IOW, dynamically allocated memory that isn't directly wired to processes.

(64-bit kernel/32bit userland)
> Really? I would assume they'd recompile everything (although that may not
> be a good idea).

There is not much difference between running 16-bit programs on a 32-bit OS and running 32-bit programs on a 64-bit OS.

> > you can't really notice the difference, unless you really
> > get to the edges.
>
> Sounds almost not even worth it. :-/

Under Linux even less so, since Linux supports PAE and 32-bit Linux has access to the full 4 GB. On Windows this feature is reserved for the server SKUs.

> What I heard was that 64-bit was 10% faster, but a lot less drivers work.

Not my experience, though I can imagine it could be true for something that is really sensitive to the changes (like e.g. Photoshop).

> > I read up a lot on 64-bits programming and the summary is that rule of
> > thumb, general purpose code is slightly slower. But this is only a
> > measurable difference, not a noticable one.
>
> Maybe that's an implementation issue that will be fixed in later models??

Yes and no. It is simply the larger average data size, so it won't go away unless you use 32-bit binaries or some form of a NEAR model.

> > conventions, so this is not always a straightforward to predict. Afaik,
> > all OSes use the same calling convention now btw. A lot less cdecl vs
> > stdcall nonsense.
>
> GCC supports x86-64 (and before that various other 64-bit chips), so I
> assume it's not too bad at handling this.

There is nothing to handle. As said, the calling conventions are pretty universal across x86_64-supporting OSes and compilers, and the rest is basic register allocation. x86-64 is quite RISCy in this regard (and by that I mostly mean the orthogonality of the instruction set, and more registers than x86).

Btw, in general GCC is worse in CPU specific optimizations than other compilers.

> > The variation in CPU capabilities on x86_64 is also much lower than on
> > x86, at least for now.
>
> There doesn't seem to be much added to IA32 chips since CMOVxx besides
> SIMD stuff.

Well, SSE2 is nearly a complete floating point unit too. Faster, but less precise.

> So, basically, it seems you only get very very minor
> performance differences without targeting one of those instruction sets
> specifically (at least with GCC). The most popular seem to be MMX and SSE2
> (not 3dnow! or SSE1, less need for doubles than scalars??).

It's a misconception that the SIMD sets only matter for vectorizable purposes.

The main bottlenecks of the average application are the heap manager and the memory move function. In the latter, it helps a lot if you can use SIMD instructions, which move with larger granularity. x86 was always a bit awkward in this regard, compared to e.g. a 68040, which could already move 16 bytes in one instruction in the early nineties.

See e.g. the Fastcode project.

> > Registers are not saved properly during a context
> > switch. This lowers theoretical maximal performance since you can't use
> an
> > unit.

> Some people say that the FPU and MMX are both deprecated in favor of SSE2.

Afaik yes.

> Maybe MS feels the same way, I dunno. And don't forget that SSE4.1 and
> SSE4a are already on the market (with AVX and SSE5 in the planning
> stages). However, a lot of legacy (32-bit, too) code uses the FPU, but for
> whatever reason, computers are moving too fast these days to (commercially)
> worry too much about legacy (although I think it's more of an issue with MS
> than not).

Yes, but you can't introduce a whole new architecture every two years. Till now it has happened twice: the i386 and the x86_64. All x86_64 machines have SSE2.

> Microsoft basically started over from scratch with the XBox360 because it
> uses a PowerPC-ish cpu (tri-core 3.2 Ghz) unlike the previous Intel PIII
> Celeron 733 Mhz.

You are always much freer in choosing an embedded CPU. While there might be some legacy concerns, they are generally smaller.

Rugxulo(R)

Homepage

Usono,
30.07.2008, 00:54

@ marcov
 

HX question about link.exe

> > Why are MS OSes always really really bad cooperating with others??
>
> While agreed in general, (and even in this case), the root cause is a bit
> different. I suspect one of the OSes has a different view of harddisk

I guess it just assumes that Windows was hosed so that they won't get tons of support calls saying, "I can't boot anymore!" (Better safe than sorry?)

> > Isn't Xen not in the kernel tree unlike LVM
>
> Emulators are quite slow, so I decided to go with
> hypervisors. It worked fine for a while, but it broke with Fedora 7, and
> never was fixed since.

I've never tried Fedora, but what I've heard (re: 9) is that it has lots of new, cutting-edge stuff, which is nice except that it's less stable than you'd like. Some people adore it, some complain about it (and Ubuntu too) in that regard.

> There is not much difference between running in 16-bit programs on a
> 32-bit OS and 32-bit in a 64-bit process.

I would have to run DOSEMU x86-64 to really see how good / bad performance is. So far, I haven't (since I'm too dumb to compile it myself; too bad no distros come with it). If anybody here ever tries it, I'm interested in your results (although I have no intention of running x86-64 full-time).

> > Sounds almost not even worth it. :-/
>
> Under Linux less even, since Linux supports PAE and 32-bit Linux has
> access to the full 4GB.

FreeBSD supposedly supports it, but kernel + modules must all be compiled with that in mind (according to Wikipedia). I don't think most home users have more than 4 GB (at most) just yet.

> > > I read up a lot on 64-bits programming and the summary is that rule
> of
> > > thumb, general purpose code is slightly slower.
> >
> > Maybe that's an implementation issue that will be fixed in later
> models??
>
> Yes and no. It is simply the average larger data size. So unless you have
> 32-bit bins or some form of a NEAR model.

So what is the advantage of 64-bit then?? If more registers don't equal more speed but use more memory, then it's crap, just a novelty, not worth bothering with.

> Btw, in general GCC is worse in CPU specific optimizations than other
> compilers.

What other compilers? I'll admit GCC is pretty darn good but could indeed be improved (in theory, I dunno how). Intel's doesn't count because it's expensive. Okay, yes, MSVC is supposedly better now, but it's only got one platform + architecture to target, so it's probably easier / simpler.

> The main bottle necks of the avg application are the heapmanager and the
> memory move function. And in the latter, it helps a lot if you can use
> SIMD instructions which move with larger granularity. x86 was always a bit
> ackward in this regard, compared to e.g. an 68040 that could move 16 bytes
> in one instruction in the early nineties already.

Did you ever have one of those? I didn't, but I think the Atari Falcon used one (which IIRC was at one time development platform for their Jaguar console).

According to Darek Mihocka, he was able to emulate an Atari ST ("Gemulator") at full speed on a 486 in 1992. (This is also the same guy who sped up BOCHS recently.) Heck, he got BOCHS running on his PS3.

> Yes, but you can't introduce a whole new architecture every two years.
> Till now it happened twice. The i386 and the x86_64. All x86_64 machines
> have SSE2.

They sell these machines because they are either more powerful and/or faster than their predecessors. "Run your favorite apps in half the time!" or "30% speedups!" In reality, they do run better/cooler/quieter, but they're mostly unnecessary upgrades. I personally prefer software optimization over hardware upgrades, because the former benefits everyone while the latter only benefits you (and potentially wastes money). The idea that our computers are 1000x stronger than 10 years ago astonishes me, because we aren't able to do 1000x more stuff (only maybe five or ten times as much, if even).

There should probably be a) media PCs, b) gaming PCs, c) legacy / work PCs, and d) bleeding-edge / experimental PCs. (I mean, I doubt I'll ever use / need DirectX 10 because I'm not a PC gamer. And x86-64 is of no use to me now if the only tangible advantage is access to > 4 GB RAM.)

> > Microsoft basically started over from scratch with the XBox360 because
> it
> > uses a PowerPC-ish cpu (tri-core 3.2 Ghz) unlike the previous Intel
> PIII
> > Celeron 733 Mhz.
>
> You are always much freer in choosing an embedded CPU. While there might
> be some legacy concerns, they are generally smaller.

I don't know about your experiences or what languages / CPUs you've programmed for, but what is your favorite architecture (if any), and why (price and power consumption not considered)? In other words, is the 68000 better than the i386 in your eyes?

Steve(R)

Homepage E-mail

US,
30.07.2008, 05:22
(edited by Steve, 30.07.2008, 09:03)

@ Rugxulo
 

HX question about link.exe

> > an 68040 that could move 16 bytes
> > in one instruction in the early nineties already.
>
> Did you ever have one of those? I didn't, but I think the Atari Falcon
> used one (which IIRC was at one time development platform for their Jaguar
> console).

The last Amigas, the 3000 and 4000, and Steve Jobs's NeXT used the 68040. Excellent chip, performance about equal to the i80486, but designed to run Unix on high-end workstations.

marcov(R)

30.07.2008, 12:53

@ Steve
 

HX question about link.exe

> > Did you ever have one of those? I didn't, but I think the Atari Falcon
> > used one (which IIRC was at one time development platform for their
> Jaguar
> > console).
>
> The last Amigas, the 3000 and 4000, and Steve Jobs's NeXT used the 68040.
> Excellent chip, performance about equal to the i80486

In my experience better. Also, the last generations of 68k Macs used the 68040. And those were more than a few.

Amigas also went on to use later 68k CPUs, though mostly via third-party plug-in or replacement boards. One of the FPC developers has a 68060.

Similar "upgrade" boards for the 68k macs contained PPC 601s mostly.

AFAIK a common source of confusion is that 68040s are labelled according to their base speed and not "base speed x multiplier" as Intel's are (a DX2-66 = 2x33 vs. a 68040-33). Maybe because, AFAIK, the 68k wasn't entirely 2x.

Steve(R)

Homepage E-mail

US,
30.07.2008, 20:07

@ marcov
 

HX question about link.exe

> > Tha last Amigas, the 3000 and 4000, and Steve Jobs's NeXT used the
> 68040. Excellent chip, performance about equal to the i80486
>
> In my experience better...

Well, it's an old debate. The 680x0 and x86 were designed for different purposes; each had strengths and weaknesses relative to the other. For running Unix the 68040 was superior; for some math operations the 486 was faster...

> Amigas went on to also utilize later 68k cpu's, though mostly via 3rd
> party plugin boards/replacementboards. One of the FPC devels has a 68060.

Commodore only went up to the 68040 on the 4000. But the Amiga's design made CPU upgrades easy, and there were always third-party options.

> Similar "upgrade" boards for the 68k macs contained PPC 601s mostly.

In the '90s there were rumors that Commodore would go to the PPC after the 4000. But then Commodore died, no more new Amigas.

> Afaik a common source of confusion is that 68040's are labelled according
> to their base speed, and not "base speed x multiplier" as Intel (DX2-66 =
> 2x 33 vs a 68040-33). Maybe because afaik the 68k wasn't entirely x2.

Correct - some operations were clock-doubled, some were sped up (relative to the 68030) by other means - data and memory caches, pipelining, etc.

marcov(R)

30.07.2008, 22:42

@ Steve
 

HX question about link.exe

> > > Tha last Amigas, the 3000 and 4000, and Steve Jobs's NeXT used the
> > 68040. Excellent chip, performance about equal to the i80486
> >
> > In my experience better...
>
> Well, it's an old debate. The 680x0 and x86 were designed for different
> purposes, each had strengths and weaknesses relative to the other. For
> running Unix the 68040 was superior, for some math operations the 486 was
> faster...

Could be. I haven't done much float-dependent stuff in general (as long as it had an FPU, I was happy). Maybe the FPU wasn't clock-doubled.

> > Amigas went on to also utilize later 68k cpu's, though mostly via 3rd
> > party plugin boards/replacementboards. One of the FPC devels has a
> 68060.
>
> Commodore only went up to the 68040 on the 4000. But the Amiga's design
> made CPU upgrades easy, and there were always third-party options.

Same with Mac.

It's simply the finite number of models I think. And the fact that new models remained expensive for longer, so upgrades were more worthwhile.

Especially the 840AV with its native 40MHz (overclocked) speed is nice, also because it can take 128 MB of RAM with fairly ordinary 32MB 72-pin SIMMs. And I bought the machine itself for EUR 15.

Of course, to make the switch Amiga->Mac interesting at all, one must not be interested in AmigaOS itself :-)

(After Commodore, I went to PCs. When I needed 68ks for FPC purposes, I chose Macs since they were easier and cheaper to come by. And I ran NetBSD or Linux as FPC reference OSes for the most part anyway. For me, AmigaOS or MacOS were bootloaders.)

> > Similar "upgrade" boards for the 68k macs contained PPC 601s mostly.
>
> In the '90s there were rumors that Commodore would go to the PPC after the
> 4000. But then Commodore died, no more new Amigas.

Third parties and the AmigaOS 4.0 brethren such as the AmigaOne did. One of the third parties (Phase5) also made upgrade cards for the Mac.

I never got further than a Derringer 030/50 without (68882) FPU in an Amiga 500+, but said FPC core member (the one with the 68060 upgrade card) has heaps of exotic Amiga stuff.

Steve(R)

Homepage E-mail

US,
31.07.2008, 05:09

@ marcov
 

HX question about link.exe

> > Well, it's an old debate. The 680x0 and x86 were designed for different
> > purposes, each had strengths and weaknesses relative to the other. For
> > running Unix the 68040 was superior, for some math operations the 486 was faster...
>
> Could be. Haven't done much float dependant stuff in general. (as long as
> it had one, I was happy). Maybe the fpu wasn't clockdoubled.

AFAIK the FPU wasn't doubled. Another issue was that it implemented fewer operations in hardware than Intel's FPU; some were executed in software.

> Of course to make the switch Amiga->Mac interesting at all, one must be
> not interested in the AmigaOS itself:-)

:-) indeed.

> but said FPC coremember (the one with the 68060 upgrade card) has
> heaps of exotic Amiga stuff.

Including x86 plugin boards?

marcov(R)

31.07.2008, 20:45

@ Steve
 

HX question about link.exe

> > but said FPC coremember (the one with the 68060 upgrade card) has
> > heaps of exotic Amiga stuff.
>
> Including x86 plugin boards?

No idea, but I myself had a P-I 133 one for some PowerMac at some point.

marcov(R)

30.07.2008, 10:02

@ Rugxulo
 

HX question about link.exe

> I guess it just assumes that Windows was hosed so that they won't get tons
> of support calls saying, "I can't boot anymore!" (Better safe than sorry?)

I think so too, yes.

(Fedora)

Fedora usually is pretty nice. I haven't had time/interest to spin up another distro to test whether it is Fedora or HW. But the fact that the problem (crashing kernel) persists over multiple versions makes me suspect it is HW.

> FreeBSD supposedly supports it, but kernel + modules must all be compiled
> with that in mind (according to
> I don't think most home users have more than 4 GB (at most) just yet.

Yes, but it doesn't leave you an installation choice as e.g. Fedora does. I don't massively run VMs on FreeBSD atm, so no need to get that last bit.

> > Yes and no. It is simply the average larger data size. So unless you have
> > 32-bit bins or some form of a NEAR model.
>
> So what is the advantage of 64-bit then?? If more registers doesn't equal
> more speed but uses more memory, then it's crap, just a novelty, not worth
> bothering with.

The main reasons are: the memory limit lifted, and enough address space (x86 was running out of that too; where do you memory-map a DVD image?). Don't forget that the reason you can only use 3.25GB out of 4 without PAE is that it ran out of address space.

And of course the gradual transition, contrary to the previous attempts of Alpha and Itanium. (and maybe PPC)

> > Btw, in general GCC is worse in CPU specific optimizations than other
> > compilers.
>
> Okay, yes, MSVC is supposedly better now, but it's only got one
> platform + architecture to target, so it's probably easier / simpler.

As you say, MSVC and Intel. And yes, it may be easier to do so, but I don't think that's the reason. It is probably more a maintainability issue.

(68040)

> Did you ever have one of those? I didn't, but I think the Atari Falcon
> used one (which IIRC was at one time development platform for their Jaguar
> console).

Still have: a Mac 840AV. I'm not the original owner though. 68k-wise I had some base-model Amigas after I left the C=64, but nothing beefy, and then I went to a 386sx-20. The 68040 sits somewhere between a 486 and a Pentium. Also, iirc, it is a "DX 1.5" of sorts: IOW, parts of the CPU run at doubled clock speed (like a DX2), and parts don't (like a DX).

> > Yes, but you can't introduce a whole new architecture every two years.
> > Till now it happened twice. The i386 and the x86_64. All x86_64
> machines
> > have SSE2.
>
> They sell these machines because they are either more powerful and/or
> faster than their predecessors.

And we all know there are lies, damn lies and commercials.

> I personally prefer software optimization over
> hardware upgrades because the former benefits everyone while the latter
> only benefits you (and potentially wastes money).

If you actually charge for it, software optimization is way more expensive. Especially for the more specialized software.

> > You are always much freer in choosing an embedded CPU. While there might
> > be some legacy concerns, they are generally fewer.
>
> I don't know about your experiences or what languages / cpus you've
> programmed for, but what is your favorite architecture (if any) and why
> (price and power consumption not considered)? In other words, is 68000
> better than i386 in your eyes?

Pascals, Modula-2, C, Java, a bit of C++ and C#, and asm for an arch or five (6510, 68k, PPC, ARM, x86, x86_64; dabbled with SPARC, and several "Microchip" embedded CPUs).

Nearly anything is better than x86, if only for its register starvation, but that was never the question. x86 was clumsy due to the moment in time it was frozen, thanks to the dreaded PC compatibility. (On Amiga you HAD heaps of programs you could only run with certain upgrades, CPU and video card.)

The i386 arch was already a lot better, and the 64-bit one is again better. And price/performance was always stronger there.

Rugxulo(R)

Homepage

Usono,
30.07.2008, 01:19

@ marcov
 

HX question about link.exe

> > What I heard was that 64-bit was 10% faster, but a lot less drivers
> work.
>
> Not my experience, though I can imagine it could be true for something
> that is really sensitive to the changes (like e.g. Photoshop).

Well, here's my wimpy experiment.

Tue Jul 29 14:58:03 UTC 2008
AMD64x2 TK-53 1.7 Ghz w/ 1 GB RAM

g++ -s -static -O3 -fomit-frame-pointer paq7asm-x86_64.o paq8o8.cpp -o paq8o8.linux64 -DDEFAULT_OPTION=1

./paq8o8.linux64 -1 doydoy1 paq8o8.cpp
Creating archive doydoy1.paq8o8 with 1 file(s)...
paq8o8.cpp 142636 -> 33816
Time 3.53 sec, used 37286859 bytes of memory

./paq8o8.linux64 -5 doydoy2 paq8o8.cpp
Creating archive doydoy2.paq8o8 with 1 file(s)...
paq8o8.cpp 142636 -> 30507
Time 21.30 sec, used 224343431 bytes of memory

paq8o8.linux64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.8, statically linked, stripped

Linux ubuntu 2.6.24-19-generic #1 SMP Wed Jun 18 14:15:37 UTC 2008 x86_64 GNU/Linux


P.S. The static binary is 743k (vs. 670k for 32-bit static). This isn't much faster at all. Oh well. (It even takes 11 secs. to compile, even though the whole thing is in RAM!) And BTW, I tried to run my previous 32-bit compiles but neither seemed to work (permission denied, for whatever reason), even as root. Meh.
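As an aside, "permission denied" on Linux is usually a missing execute bit (or a noexec mount), while a 32-bit binary lacking its 32-bit loader fails with "No such file or directory" instead. A minimal illustration (hypothetical file name `prog`, not from the thread):

```shell
# Distinguishing exec failures on Linux (illustrative sketch).
# EACCES ("Permission denied") = no execute bit or noexec mount;
# ENOENT ("No such file or directory") on a real binary often means
# the required dynamic loader (e.g. 32-bit ld-linux.so.2) is missing.
printf '#!/bin/sh\necho ok\n' > prog
chmod -x prog
./prog || true        # fails: Permission denied
chmod +x prog
./prog                # prints: ok
```

Here `chmod +x` fixes the first failure; a missing 32-bit loader would need the 32-bit runtime libraries installed instead.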

marcov(R)

29.07.2008, 22:16

@ Rugxulo
 

HX question about link.exe

(part 2)

> P.S. QEMU loading latest XUbuntu x86-64 was taking forever (15+ minutes)
> and still didn't get past the moving bar screen, so I had to kill it. So
> at least at the moment that kind of emulation is not ideal.

I still hardly use virtualization/hypervisor in practice. Problem is that the odd ball cases that I actually need (e.g. openbsd) are the interesting ones for me (for FPC testing). So you always get one or two running, but not the third.

In the end, simply setting it up on a separate partition is always speedier and easier. I've a few older machines, and recently got an Athlon XP donated that replaced my aging (read: dying) P-III. In theory it is more work maintaining all the machines and OSes vs. a few easy VM images, but in practice somehow it never worked out for me. Due to FPC cycling I do need a bit of performance though; that is 30MB of source that I compile to see if something is fixed.

Rugxulo(R)

Homepage

Usono,
30.07.2008, 01:00

@ marcov
 

HX question about link.exe

> (part 2)
>
> > P.S. QEMU loading latest XUbuntu x86-64 was taking forever (15+
> minutes)
> > and still didn't get past the moving bar screen, so I had to kill it.
> So
> > at least at the moment that kind of emulation is not ideal.
>
> In the end, simply setting it up on a separate partition always speedier
> and easier. I've a few older machines, and recently got an Athlon XP
> donated that replaced my aging (read: dying) P-III.

They just don't last long enough, do they? :-( Then again, a lot of people prefer upgrading to newer / better anyways. I'm not sure it truly makes us more productive, though (esp. considering the learning curve).

> In theory it is more
> work maintaining all the machines and OSes vs a few easy VM images, but in
> practice somehow it never worked out for me.

Apparently, some VMs have problems with images created on different host cpus. Don't ask me why.

> Due to FPC cycling I do need a
> bit of performance though, that is 30MB source that I compile to see if
> something fixed.

Yeah, compiling is slow enough without being ultra slower in emulation. Then again, if emulation works, even if slow, it's better than nothing (IMHO). We could solve the problem entirely if we had full development suite live CDs (instead of ones that seemingly only target multimedia / office users). We need more cross build tools. I'm surprised no one has done this already (AFAICT).

marcov(R)

30.07.2008, 22:33

@ Rugxulo
 

HX question about link.exe

> > In the end, simply setting it up on a separate partition always
> speedier
> > and easier. I've a few older machines, and recently got an Athlon XP
> > donated that replaced my aging (read: dying) P-III.
>
> They just don't last long enough, do they? :-(

They are mostly used for FPC testing (except one FreeBSD and one Linux). And if you have a handful of packages, reinstalling is actually faster than doing long term package management and upgrades.

> Then again, a lot of
> people prefer upgrading to newer / better anyways. I'm not sure it truly
> makes us more productive, though (esp. considering the learning curve).

I'm in between. I'm never the first, unless I have a special interest (e.g. I went to FreeBSD 5 at the time because of cardbus support, and later to 6 due to wlan capabilities).

However, I don't try to personally breathe life into a dying horse out of principle.

> > Due to FPC cycling I do need a
> > bit of performance though, that is 30MB source that I compile to see if
> > something fixed.
>
> Yeah, compiling is slow enough without being ultra slower in emulation.
> Then again, if emulation works, even if slow, it's better than nothing
> (IMHO).

Well, the problem is that testing on emulators always leaves the "but does it work on the real thing?" question. Moreover, simply installing it on a separate partition is not that hard since HDs got bigger than, say, 8GB. (Actually, one reason to go to a hypervisor is that the partition table then no longer limits the number of installations. It's a tradeoff though; even though it is not really emulation, HD speed to the images suffers.)

> We could solve the problem entirely if we had full development
> suite live CDs (instead of ones that seemingly only target multimedia /
> office users).

Please no. I have done so; it's dog slow, with the CD constantly spinning, etc.

> We need more cross build tools. I'm surprised no one has
> done this already (AFAICT).

Also the products of cross build tools need validation.

Rugxulo(R)

Homepage

Usono,
30.07.2008, 16:32
(edited by Rugxulo, 30.07.2008, 16:49)

@ marcov
 

HX question about link.exe

> Total RAM use under Windows is higher, but not twice. This is logical,
> since most are buffers anyway. Moreover, both systems can run also 32-bit
> binaries, and e.g. 64-bit bootcds often only have a 64-bit kernel, but a
> complete 32-bit userland. So does e.g. Mac OS X (which installs a 64-bit
> kernel on x86 during setup if it sees that you have enough ram iirc)

AFAICT, Xubuntu x86-64 is all 64-bit userland (according to file, at least). In general, I think compiling for 32-bit is done via something like "-m32", but I've never tried (and you also need the 32-bit libraries).

> If you see programs in the process manager, they are nearly twice the
> size. Say 170-180% of the 32-bit eq. Similarly on disc.

I know I'm beating a dead horse by mentioning this, but UPX does actually support Linux 64-bit binaries.

> Yes, but you can't introduce a whole new architecture every two years.
> Till now it happened twice. The i386 and the x86_64.

While true, I would also add that the 286 itself seems to have been a fair change in its own right, for several reasons: IBM's penchant for selling lots of them and committing to the chip via OS/2, the fastest speed increase between subsequent processors ever, the lack of an (official) way to switch out of pmode (i.e. breaking compatibility), and of course adding pmode to the instruction set and raising total RAM potential to 16 MB. Granted, it didn't add or extend any registers, but it was still a big deal at the time (from what I hear).

>> I don't know about your experiences or what languages / cpus you've
>> programmed for ...
>
> Pascal's, Modula2, C, Java bit of C++ and C# and asm for an arch or 5.
> (6510,68k,ppc,arm,x86,x86_64. Dabbled with sparc. And several
> "Microchip" embedded cpu's).

I assume you would agree with my (blind) guess that FreePascal probably does everything Modula2 does and more (unlike original Pascal). And I was curious about Modula2 recently even though I've never tried it. I did find this interesting:

http://cfbsoftware.com/modula2/

> The Lilith executes M-code, a pseudo-code similar to the P-code of
> the Pascal compilers also designed by N. Wirth. The M2M-PC System is
> an M-code interpreter for the IBM-PC running DOS 2.0 developed by
> the Modula Research Institute allowing the Lilith Modula-2 compiler
> and its output to be executed on the IBM-PC.

> The first Modula-2 compiler was completed in 1979 and ran on the
> DEC PDP-11. This is the source code of the PC version of the
> second Modula-2 compiler. It generates M-code for the Lilith and can be
> compiled and run using the M2M-PC System.

Of course, I suspect something like FST's compiler is more popular / useful? / compliant? (And there are various M2 -> C converters, as well as GNU Modula-2, which apparently is src-only.)

http://en.wikipedia.org/wiki/Modula_2

marcov(R)

30.07.2008, 22:57

@ Rugxulo
 

HX question about link.exe

> > Total RAM use under Windows is higher, but not twice. This is logical,
> > since most are buffers anyway. Moreover, both systems can run also
> 32-bit
> > binaries, and e.g. 64-bit bootcds often only have a 64-bit kernel, but
> a
> > complete 32-bit userland. So does e.g. Mac OS X (which installs a
> 64-bit
> > kernel on x86 during setup if it sees that you have enough ram iirc)
>
> AFAICT, Xubuntu x86-64 is all 64-bit userland (according to file,
> at least). In general, I think compiling for 32-bit is done via something
> like "-m 32", but I've never tried (and you also need the 32-bit libraries
> too).

It works, but it is messy for people who don't know the drill. Trying to do this teaches you how to turn up the gcc and binutils verbosity options :-)

> > If you see programs in the process manager, they are nearly twice the
> > size. Say 170-180% of the 32-bit eq. Similarly on disc.
>
> I know I'm beating a dead horse by mentioning this, but UPX does actually
> support Linux 64-bit binaries.

And not 32-bit? Since even _IF_ I were that braindead, the ratios would still be roughly equal for upxed 64-bit vs. upxed 32-bit.

> > Yes, but you can't introduce a whole new architecture every two years.
> > Till now it happened twice. The i386 and the x86_64.
>
> While true, I would also add that the 286 itself seems to have been
(...)

It was a big deal for the OS and maybe driver creators only, not for the app programmers. If you have to count those, you also need to count PAE.

> > Pascal's, Modula2, C, Java bit of C++ and C# and asm for an arch or 5.
> > (6510,68k,ppc,arm,x86,x86_64. Dabbled with sparc. And several
> > "Microchip" embedded cpu's).
>
> I assume you would agree with my (blind) guess that FreePascal probably
> does everything Modula2 does and more (unlike original Pascal).

Not everything, but the missing pieces are fairly minimal. M2 has the better concepts and syntax, but the improvements on Pascal are not crucial enough to put up with a lesser compiler.

> Of course, I suspect something like FST's compiler are more popular /
> useful? / compliant? (And there are various M2 -> C convertors as well as
> GNU Modula2 which apparently is src-only.)

IMHO there is no good one, probably Stonybrook comes closest. FST is ages old, and was never extended beyond the minimal.

GNU M2 moves very slowly, and has similar problems to GNU Pascal. I hope FreeBASIC can avoid the pitfalls these two projects had with recycling gcc.

Topspeed was nice, and is IMHO still one of the best, if not THE best 16-bit compiler. (also had C++ and an ISO Pascal with minimal TP mode)

Still, I abandoned M2 for Pascal because of compiler quality, and I never looked back.

(added in postedit)

The M2 crowd in my day also seemed more interested in correctness and standardization details than in general-purpose application building. Probably most were either teachers or embedded programmers, two fields where M2 was extensively used.

Rugxulo(R)

Homepage

Usono,
31.07.2008, 20:12

@ marcov
 

HX question about link.exe

> For running Unix the 68040 was superior

How so? Which Unix are we talking about?

>> We could solve the problem entirely if we had full development
>> suite live CDs
>
> Please no. I have done so, dog slow. Constantly spinning CD etc.

You would obviously want to copy it to RAM to avoid the slow CD access times, esp. for compiling stuff.

>> While true, I would also add that the 286 itself seems to have been ...
> It was a big deal for the OS and maybe driver creators only. Not for the app programmers. If you have to count those, you also need to count PAE.

I think it was too fast for some pre-existing common apps, incompatible in places (push sp / pop sp), and especially limited by the "pmode-only" approach (e.g. some stuff wouldn't run). You couldn't even run multiple DOS apps at the same time, which (supposedly) made Bill Gates call it "brain-dead". So it was a big switch. OS/2 1.1 was probably the last to fully support 286s, and WfW 3.11 was 386-only. For whatever reason, the 386 became much, much more popular, to the point of heavily eclipsing the 286 in software development (even for apps that didn't need that much RAM). You don't see a lot of 286 pmode assembly apps these days (no free 286 DOS extenders that I know of). With the exception of some Borland tools, I'm not sure it was ever that well supported (from my limited view in hindsight).

As far as PAE goes, it hasn't really taken off yet (except for server OSes?). We don't all have that much RAM (comparatively) yet. Maybe that's the way to go for keeping V86 + 16-bit stuff working even with high amounts of RAM. So who really needs x86-64? ;-)

Steve(R)

Homepage E-mail

US,
01.08.2008, 05:12

@ Rugxulo
 

HX question about link.exe

> > For running Unix the 68040 was superior
>
> How so? Which Unix are we talking about?

The 68000 was the CPU of choice for early developers of Unix workstations. It was at least in part a happy coincidence - the 68000 introduced nice new features around the time when the workstation developers were shopping for CPUs. Sun and HP, to name only two, used the 68000 in their earliest workstations. Sun's OS was based on BSD.

In the same period when Unix was being fitted to the 680x0, Motorola worked to fit the 680x0 to Unix, adding hardware features that supported, e.g., multitasking, and supplying reference C compilers.

The Amiga OS was written in C, and partly based on Unix (Hacker's note: OS code was available to users for modification). The NeXT machines used the 68030 and 68040, running an OS based on Mach Unix. The NeXT OS later was ported to other CPUs and forked into a bunch of other OSes, including Mac OS X.

marcov(R)

01.08.2008, 12:34

@ Steve
 

HX question about link.exe

> > > For running Unix the 68040 was superior
> >
> > How so? Which Unix are we talking about?
>
> The 68000 was the CPU of choice for early developers of Unix workstations.
> It was at least in part a happy coincidence - the 68000 introduced nice new
> features around the time when the workstation developers were shopping for
> CPUs. Sun and HP, to name only two, used the 68000 in their earliest
> workstations. Sun's OS was based on BSD.

I robbed most of my 68882 FPUs (for 68030/50 accelerator boards, both Amiga and Mac) from Hewlett-Packard 9000 Series 300 machines that were trashed by the physics department.

RayeR(R)

Homepage

CZ,
29.07.2008, 14:12

@ Rugxulo
 

HX question about link.exe

> There's no V86 mode in 64-bits. Long mode encompasses a compatibility
> submode and a 64-bit submode (I think). So it's not that NTVDM was
> removed, it just doesn't work on 64-bits due to the processor. (However,
> you'd think MS would emulate it a la DOSEMU or DOSBox, but no. If it
> doesn't make them money, I guess they don't care.)

Aha, I thought that 64-bit mode was some kind of instruction extension that kept all the 32-bit and older stuff. So then it is a problem. But new CPUs have virtualization technology. I don't know how it works at a low level; could it be useful as a VM replacement?

---
DOS gives me freedom to unlimited HW access.
