DOS ain't dead

w3a537(R)

Colorado Springs CO USA,
03.07.2014, 22:39
 

FREE Basic & Pascal (Developers)

On each of these forums I asked what the size of the
"hello world" program would be.

FB= 93k
FP= 170k

I am about to end my participation in these forums,
but there are beginning to be some outlandish
(maybe too strong) comments, if anyone cares to
take a peek. I have asked for explanations;
none have been posted yet.

SB

Rugxulo(R)

Usono,
04.07.2014, 05:55

@ w3a537
 

FreeBasic and FreePascal

> On each of these forums I asked what the size of the
> "hello world" program would be.
>
> FB= 93k
> FP= 170k

1). FreeBASIC is a native compiler that can compile itself (to assembly); the result is linked with its own RTL (C?) and libc (here, from DJGPP). AFAIK, there are no real size optimizations beyond "strip" (removing debugging info). The COFF linker doesn't delete unused code, so everything has to be manually massaged and put into the library, which was never a priority for anybody involved in DJGPP.

EDIT: I almost forgot, FreeBASIC supports several dialects. Even for my really simple and silly Befunge interpreter, I found that my "qb" code was 45 kb larger than the similar "fblite" .EXE. So if you can live with the dialect differences, switching to that (or even "fb") might help.

2). FreePascal also compiles itself but has its own homegrown RTL and units, which don't need a C library at all. Here I'm sure it can be much, much smaller. Try smartlinking etc. (-CX -XX -Xs -O3). Smartlinking is not the default because it is much slower to compile, which is a "bad thing" for quick development.

3). If none of that is good enough, try UPX: upx --best --lzma --all-filters
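
To make the three points concrete, here's a minimal sketch. The FBC dialect is just a command-line switch (-lang), and the FPC flags are the ones listed in 2) and 3); I haven't re-measured sizes here, so treat the savings as approximate:

fbc -lang fblite hello.bas

{ hello.pas -- one-line test program }
begin
  writeln('hello world');
end.

fpc -CX -XX -Xs -O3 hello.pas
upx --best --lzma --all-filters hello.exe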

> I am about to end my participation in these forums,
> but there are beginning to be some outlandish
> (maybe too strong) comments, if anyone cares to
> take a peek. I have asked for explanations;
> none have been posted yet.

I don't have even the slightest clue what you're referring to. But the Internet is full of all types of cranks (ahem, comments on every YouTube video ever), so it's probably just normal everyday angst. Don't sweat the small stuff.

marcov(R)

04.07.2014, 14:01

@ Rugxulo
 

FreeBasic and FreePascal

> I don't have even the slightest clue what you're referring to. But the
> Internet is full of all types of cranks (ahem, comments on every YouTube
> video ever), so it's probably just normal everyday angst. Don't sweat the
> small stuff.

Some people are simply oversensitive.

Oh, and he apparently missed the post with the fpc 16-bit results:

Using last fall's 16-bit snapshot (tiny memory model:)

D:\pp16>file hw.com
hw.com: DOS executable (COM)

D:\pp16>dir hw.com
07/04/2014 10:49 16,528 hw.com

Rugxulo: -O3 won't do much on a program that only contains one statement (writeln('hello world');)

w3a537(R)

Colorado Springs CO USA,
04.07.2014, 20:56

@ marcov
 

FreeBasic and FreePascal

It really all started when somebody criticized me over my floppy drive.

And the fact that my base DOS system can be copied to a floppy and it fits.

I would provide it here on BTTR but I'm not sure about the licenses for
all the programs.

And I was criticized over my preference of small programs. Selecting small
is a waste of time they said.

And then somebody told me to buy a big USB stick.

None of these had anything to do with FP, but they became major components
of the thread.

SB

w3a537(R)

Colorado Springs CO USA,
04.07.2014, 21:09

@ w3a537
 

FreeBasic and FreePascal

> -O3 won't do much on a program that only contains one statement (writeln('hello world');)

This is very true. This is why I changed the target to a GREP utility.
These programs have a definite use.

I was just trying to find out about FP when the very next post criticized
my floppy drive.

SB

Rugxulo(R)

Usono,
15.07.2014, 18:43

@ w3a537
 

FreeBasic and FreePascal

> -O3 won't do much on a program that only contains one statement
> (writeln('hello world');)
>
> This is very true. This is why I changed the target to a GREP utility.
> These programs have a definite use.

DJGPP GNU Grep 2.11 (2.03p2, aka /current/, its latest / last available version):

fgrep.exe 173568
egrep.exe 245248
grep.exe 246784
greppcre.exe 332800

DJGPP GNU Grep 2.11 (2.04, aka /beta/):

fgrep.exe 243712
egrep.exe 315904
grep.exe 316928
greppcre.exe 403456

DJGPP GNU Grep 2.20 (2.04, aka /beta/):

egrep.exe and fgrep.exe are (binary, 2048 bytes) symlinks to the main grep.exe, which is 1,383,936 bytes.

And old XGREP.COM (written in 8086 assembly) is 3380 bytes.

As you probably know, DOS "find" is more or less a simple grep clone without the regexp (but see modern Windows' FINDSTR). Latest FreeDOS FIND 3.0a with LFN support is (unpacked from UPX) only 13528 bytes (but 16-bit code is often smaller, max instruction size for 8086 and 286 is 6 and 10 bytes, respectively).

Obviously XGREP doesn't support filename globbing nor LFNs nor PCRE. And DJGPP 2.04 supports "full" symlinks (thus not only for .EXEs), which bloats up the binaries more than normal, even for same version. And note that these DJGPP binaries were probably compiled for speed (-O2) not size (-Os).
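
For what it's worth, basic usage for fixed strings is nearly equivalent between the two (the file names here are just placeholders); only grep adds regexps:

find "needle" FILE.TXT
grep needle file.txt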

Rugxulo(R)

Usono,
15.07.2014, 18:59

@ w3a537
 

FreeBasic and FreePascal

> This is very true. This is why I changed the target to a GREP utility.
> These programs have a definite use.

Another obvious comparison is assembly, which is as low a level as you can go.

Even old old (DJGPP 2.x) BinUtils AS 2.6 (1996) is 319488 bytes. GAS 2.19 circa 2008 (from 2.03p2) is 834048 bytes. Latest AS 2.24r2 (from 2.04 /beta/) is 1,265,152 bytes.

Venksi's ASM.COM (8086 only, binary output only, assembles itself) is only 7297 bytes (uncompressed).

FASM is a more robust, standalone assembler, so it's a better tool. Totally written in itself, 32-bit host but supports all targets (16-bit, 32-bit, 64-bit) and several output formats (COFF, ELF, MZ, PE) and literally all (AFAIK) current Intel and AMD instructions.

Here is latest 1.71.21:

93089 fasm (Linux native version)
95452 fasm.exe (DOS version)
102400 fasmcon.exe (Win32 console version)
120660 fasmd.exe (DOS version plus IDE, aka text editor and calculator)
143872 fasmw.exe (Win32 version plus GUI IDE)

Note that this is 100% assembly, no HLLs used! It's not even UPX'd! And honestly, a lot of that is data, even, since he's supporting all kinds of weird stuff (AVX, heh). Just for quick comparison to itself, 1.60 (circa 2005) for DOS was only 65163 bytes.

Rugxulo(R)

Usono,
15.07.2014, 19:30

@ w3a537
 

FreeBasic and FreePascal

> -O3 won't do much on a program that only contains one statement
> (writeln('hello world');)
>
> This is very true. This is why I changed the target to a GREP utility.
> These programs have a definite use.
>
> I was just trying to find out about FP when the very next post criticized
> my floppy drive.

A few years ago, I got the crazy idea to cram DJGPP on a 1.44 MB floppy. Of course it won't actually fit, so you have to compress it. What do you use? I originally used UHarc (needing 24 MB to unpack, and very slow), then later migrated to 7-Zip (and tiny 7zdecode, needing only roughly 6 MB to unpack, plus much faster). However, the "newest" DJGPP I could cram on there was 2.95.3 and BinUtils 2.16.1 with DJGPP libc 2.03p2 and a few other tools (make, ar, ed, rm). I couldn't fit strip, but "ld -s" does the same thing. Also, including "ed" was more or less my attempt at providing both text editor plus grep plus sed all-in-one.
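
For illustration only (I no longer have the exact options handy, and the tiny 7zdecode unpack step is its own thing), the packing side with stock 7-Zip looks roughly like this:

7za a -t7z -mx=9 -ms=on djgpp.7z djgpp/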

I ended up recompiling some stuff (IIRC, with GCC 4.2.3) for smaller size since usually they don't use "-Os" but prefer "-O2". Anyways, for comparison, stock GCC2953B.ZIP's CC1.EXE was 1,621,504 bytes where my recompile was 1,388,032. GCC 3.4.4's CC1.EXE (2.03p2?) was 4,212,224. Latest (2.04 /beta/ only) GCC 4.9.0's CC1.EXE is 15,596,544.

But keep in mind that latest 4.x does a lot more than 2.x or even 3.x ever did. I can already hear Marco saying, "15 MB is peanuts." Honestly, the real problem for a tool like that isn't disk footprint, nor even speed, but RAM usage. It uses much much more than 15 MB of RAM, optimizations or not, for data, not its own measly footprint. But even for its footprint, in theory, the DPMI host could demand page in the .EXE. Unfortunately, most (e.g. CWSDPMI) don't do that and just load the whole thing in one lump.

And BTW, nobody ever told me that wasting time on such an "EZ-GCC 2" was worth doing. I'm not sure 99% ever even tried it. That was back starting in 2009. Obviously people these days are even less sympathetic towards floppies! That GCC is "too old" for most projects. Nobody wants vanilla standard minimal console stuff: they want GUIs, networking, threads, you know ... all the non-portable flashy stuff. I just figured easing deployment of a minimal DJGPP would make it easier to bring the toolset to old computers. (Unfortunately, a 486 is too slow for such a version; -O0 was 2x or 4x slower than 2.7.2.1 "-O2"! But at least you can fit all of it on a small RAM disk, which combined with a software cache will speed up compiles tremendously.)

RayeR(R)

Homepage

CZ,
15.07.2014, 23:47

@ Rugxulo
 

FreeBasic and FreePascal

Does this whining still make sense? It has been discussed (not only) here many times. New compilers are of course more complex, as HW gets more complicated, languages keep extending, and code analysis gets deeper (e.g. newer GCC can recognize some dangerous parts of code and produce a warning where older versions kept silent)... So if you want to keep on track and make ports of the latest projects, you need a new compiler and libs, and you also need appropriate hardware. I think an old PII/III/4 is still good for running the latest DJGPP, and you can now pull such a computer out of a trashcan for free :)
If you need to keep old HW, then you also keep older versions of the compiler and other SW. Simple...

---
DOS gives me freedom to unlimited HW access.

marcov(R)

17.07.2014, 09:23

@ RayeR
 

FreeBasic and FreePascal

> I think an old PII/III/4 is still good
> for running the latest

And in the likely case your slow machine isn't your only one, you crosscompile.

> you can now pull such a computer out of a trashcan for free :)

(I would go for Athlon X2s and Pentium Ds. Fewer hard-disk size limits, 64-bit, DDR2 and SATA, and they come with more HDD and memory installed for the same price. Maxing out memory in old computers can be costlier than doing the same for a slightly newer model, even if that one takes more GBs.)

RayeR(R)

CZ,
17.07.2014, 11:02

@ marcov
 

FreeBasic and FreePascal

> And in the likely case your slow machine isn't your only one, you
> crosscompile.

Yes, thanks to the DMP guys we now have a Win32 port of DJGPP that can also run under Win64 :) Linux and Mac too...

> Maxing out memory in old computers can be costlier than doing the same for
> a slightly newer model, even if that one takes more GBs.)

Hehe, I know. I was looking for a DDR1 SO-DIMM for my Compaq EVO N620c notebook and wondered how expensive they sell for here (about $10 for 512M and $25 for 1GB), more than twice the price of DDR2 or DDR3. But fortunately there are Chinese eBay sellers of old-stock modules, and I was happy to buy a 512M Micron from 2005 for just $3 (with free shipping :)

Anyway, people trash a lot of desktop computers that aren't capable of running Vista/7/8 WOW bloatware well, but which are still quite usable for normal work...

---
DOS gives me freedom to unlimited HW access.

marcov(R)

17.07.2014, 11:32

@ RayeR
 

FreeBasic and FreePascal

> Anyway, people trash a lot of desktop computers that aren't capable of
> running Vista/7/8 WOW bloatware well, but which are still quite usable for
> normal work...

I use them as a build cluster running various *nixes, 2-3 each. I originally got them because they are 64-bit capable; while I had plenty of dumpster Athlon XPs, none of those were 64-bit capable.

Yes, they are only 20-30% faster than running the same build in VirtualBox on my main machine, but I can run multiple ones at the same time. I typically boot them 3 times per release cycle (typically 9 months): a pre-check build before branching, the release candidate, and the final release.

Unfortunately, since the original plan, the number of targets that I build has exploded, so this approach is probably on the way out.

marcov(R)

05.07.2014, 14:27

@ w3a537
 

FreeBasic and FreePascal

> It really all started when somebody criticized me over my floppy drive.

That was me, and the USB stick too. Both resulting from doubts about the core problem posed.

> And I was criticized over my preference of small programs. Selecting small
> is a waste of time they said.

(it _is_ IMHO)

> None of these had anything to do with FP, but they became major components
> of the thread.

It is in the sense that FP is a modern development system.

Anyway, the answer to your question is simple:

Use the oldest possible compiler, code your programs as low-level as possible, and use a customized RTL that cuts away things you don't use. (And the older you go, the less cutting you will probably have to do.)

For Turbo Pascal code, that is probably TP55 or BP7, whichever has the better ratio wrt code generation and a customizable RTL.

Maybe FPC 16-bit has better long-term papers (it has potential for better optimization, and since it is open source you could in theory go beyond that), but that is not here and now, and the RTL customization will probably be more involved anyway (since it is more complicated).


If I offended you I'm sorry, but please take the fact into consideration that I have answered such questions for 17 years now, and never saw a constructive outcome. People seem to be shopping around to keep their aging setup running for a while longer, but it isn't allowed to cost any effort. Basically it is compiler shopping.

w3a537(R)

Colorado Springs CO USA,
05.07.2014, 18:44

@ marcov
 

FreeBasic and FreePascal

> > It really all started when somebody criticized me over my floppy drive.
>
> That was me, and the USB stick too. Both resulting from doubts about the
> core problem posed.

Why should your dislike of floppies have to carry over to me?

I have a whole bunch of USB sticks and I can access them from DOS just fine.
What do you want me to do with them?

As I said on the FP forum, all my OS and OS utilities are on a HD and they fit
on a floppy. My large apps are on a USB stick in ZIP files, loaded to a ramdisk
in a .bat.

>
> > And I was criticized over my preference of small programs. Selecting small
> > is a waste of time they said.
>
> (it _is_ IMHO)
>

For example, on my big USB stick where I store a lot of things, I have 5 TEEs
ranging from 57kb to 5kb, and I selected the 5kb one for my OS utilities.

Why was this a waste of time, in your opinion?

> > None of these had anything to do with FP, but they became major components
> > of the thread.

This remains true.

>
> It is in the sense that FP is a modern development system.
>
> Anyway, the answer to your question is simple:
>
> Use the oldest possible compiler, code your programs as
> low-level as possible, and use a customized RTL that cuts
> away things you don't use. (And the older you go, the less cutting you
> will probably have to do.)

I have been using G77 for a long time.
I just wanted to ask about FP to see if it would be interesting to check
out, and FB too. The answer is no. I can stay with G77.

Why was such a big ruckus necessary?

>
> For Turbo Pascal code, that is probably TP55 or BP7, whichever has the
> better ratio wrt code generation and a customizable RTL.
>

I have been maintaining DOS systems for people in my companies as far back
as 1985. One master floppy disk: format the user's hard disk, SYS it, and
copy over all the files to the bootable HD. Boot and run.

> Maybe FPC 16-bit has better long-term papers (it has potential for better
> optimization, and since it is open source you could in theory go beyond
> that), but that is not here and now, and the RTL customization will
> probably be more involved anyway (since it is more complicated).
>
> If I offended you I'm sorry, but please take the fact into
> consideration that I have answered such questions for 17 years now, and
> never saw a constructive outcome. People seem to be shopping around to
> keep their aging setup running for a while longer, but it isn't allowed
> to cost any effort. Basically it is compiler shopping.

My bootable floppy: I have maintained it since 1985, and I upgrade it about
every 5 years.

Is this your opinion of all the people here on BTTR, that there is never a constructive outcome?

Sorry, but I have a fine DOS system. I use it when it is the way to go,
I run SPINRITE once in a while to refresh my disks, and if and when WIN7
crashes I will use IMAGE FOR DOS from TeraByte Unlimited to restore it.

SB

Rugxulo(R)

Usono,
15.07.2014, 18:27

@ marcov
 

FreeBasic and FreePascal

> > And I was criticized over my preference of small programs. Selecting small
> > is a waste of time they said.
>
> (it _is_ IMHO)

What email provider do you use? What is the file attachment limit? How much of that is eaten up by the MIME/base64 conversion?

What ISP do you use? What download/upload speeds do you have, and do you have any caps on total usage?

What motherboard do you use? How much total RAM can you install?

What OSes do you use? How much total RAM do they see? How much can an individual app appropriately utilize?

What file systems do you use? What is the maximum partition size and maximum file size? What cluster size is being used (slack space)?

Note that many Linux distros (et al.) have trouble cramming everything even on a CD, so some of them have switched to DVD. And of course DVD is too small for some things, hence dual-layer (or beyond) or Blu-Ray, etc. etc. So it's not like floppies are the only ones running out of space.

> Use the oldest possible compiler, code your programs as
> low-level as possible, and use a customized RTL that cuts
> away things you don't use.
>
> For Turbo Pascal code, that is probably TP55 or BP7, whichever has the
> better ratio wrt code generation and a customizable RTL.

TP55 is very small but also not very complicated. And obviously pure assembly is going to be smaller than almost anything.

> If I offended you I'm sorry, but please take the fact into
> consideration that I have answered such questions for 17 years now, and
> never saw a constructive outcome.

1997? You seriously want to pretend that you're using the same (or even similar) setups as back then?? Not even the same dialects or architectures! (TP only and no AMD64.) And size usually mattered more back then than now.

I'm not saying size is very important in most cases. Obviously people don't know, don't care, don't have time, "it just works", etc. But it's ridiculous to pretend that it's somehow bad to actually identify and fix the problem!
(But presumably you're just complaining about the complainers, not those actually doing work to fix this. Not that anybody is actually fixing anything here, sigh.)

> People seem to be shopping around to keep their aging setup running
> for a while longer, but it isn't allowed to cost any effort.
> Basically it is compiler shopping.

He mentioned G77 (presumably with DJGPP; and not Gfortran??), so no idea why he's looking at FBC and FPC. Obviously the point is that different languages aren't comparable. AFAIK, nobody chooses a compiler for size alone but instead for what language dialects and what target architectures (and target OSes) it supports. Size doesn't matter if you don't grok the compiler language!

But I don't think you were really advocating here that everyone should rewrite all their code every other year, just to follow latest trends. The idea that everything must be handled by the end user, no matter how complicated, and totally ignoring legacy concerns (which is all too common these days), is ridiculous. Obviously FPC isn't that stern (thankfully).

marcov(R)

17.07.2014, 11:12

@ Rugxulo
 

FreeBasic and FreePascal

> > If I offended you I'm sorry, but please take the fact into
> > consideration that I have answered such questions for 17 years now, and
> > never saw a constructive outcome.
>
> 1997? You seriously want to pretend that you're using the same (or even
> similar) setups as back then??

No, but the comparisons were much the same. Though mostly TP/BP7, not TP5.5, which was nearly completely forgotten by then. It only came back into view due to Borland setting it free.

And despite being in the last kickings of the mainstream DOS age, the writing was clearly on the wall.

Even people still creating DOS programs (like me back then) wanted productivity. Getting rid of the 640k barrier and LFN was the norm (nearly all ran under Win9x, even if only to multitask DOS boxes).

Of course somebody would whine that BP7 would generate smaller binaries, and point to some badly maintained LFN unit for LFN support and protected mode (which killed the 640kb limit, but not the 64k limit) as scapegoats. Then somebody else would comment that TP6 generated even smaller binaries.

Then somebody would whip out TP4 and convert the results to .COM; finally, somebody would write an application that did a bit of 32-bit register access and an LFN int in assembler, and prove that assembler was Turing complete. (Not that anybody asked.)

> Not even the same dialects or architectures!

I then did DOS and a bit of Win32 (Win95SE) in Pascal. Originally I installed Windows because of a traineeship, and my mentor demanded reports in Word (I used WP60). Privately I only used it to multitask DOS boxes.

Programming was mostly go32v2. I don't think the win32 port existed yet, and I was dabbling with Linux on a 486 server.

> (TP only and no AMD64.) And size usually mattered more back then than now.

Not that much; even in 1997 it didn't matter much. CD burners had arrived, and while they were quite expensive (500-600 EUR, which was worth even more back then), you simply took your HD (a Bigfoot 2.5GB, IIRC) to a shop with a burner, which would burn it for EUR 20 (later 10).

Yes, people still made rescue floppy discs back then and shoehorned something onto them (yes, even I used UPX once), but one didn't center the whole world around them anymore.

> I'm not saying size is very important in most cases. Obviously people don't
> know, don't care, don't have time, "it just works", etc. But it's
> ridiculous to pretend that it's somehow bad to actually identify and fix
> the problem!

That's the point. It is only a *PROBLEM* in minimalists' minds. Even back then I had better things to do with my time. You only do something about size if it is really, really prohibitive, not because of some underbelly feeling that it is not "right" (e.g. during a brief WinCE stint).

That's the whole issue here. Micromanaging size without a direct reason is a hobby. Sure, the proponents try to dress it up (I still have this old 8088 machine, I still deliver over 2400-baud lines, I spend my days creating boot disks), but it is still the same.

Nobody is really planning any form of development trajectory. They just want to be confirmed that they have reached the pinnacle of their personal quest, the smallest binary, and to demonstrate their "knowledge".

> (But presumably you're just complaining about the complainers, not those
> actually doing work to fix this)

The sadness of it all is that, despite my reluctance, I'm still probably doing more to fix this. At least in FPC, and I have been doing it for a long time now. It might not be enough in your eyes, but that doesn't mean that nothing is done, or that there is no eye for the worst excesses.

Much of the current work is being done for embedded targets (MIPS and ARM) that are OS-less, disk-less, and nearly RTL-less.

Rugxulo(R)

Usono,
18.07.2014, 21:53

@ marcov
 

FreeBasic and FreePascal

> > 1997? You seriously want to pretend that you're using the same (or even
> > similar) setups as back then??
>
> No, but the comparisons were much the same.

Not at all, it was a different world. We never thought we'd ever have (much less need) multi-core Gigahertz or Gigabytes of RAM or Terabytes of hard disk or 64-bit cpus at home. Back then, 32-bit / 4 GB max RAM / FAT32 was good enough for everyone.

> Though mostly TP/BP7, not TP5.5
> which was nearly completely forgotten by then. It only came back into view
> due to Borland setting it free.

Just to compare to 1997's GCC for x86: 2.7.2.x only supported 386 and 486 [sic], and -Os didn't exist yet. But it was still good enough for Quake!

> And despite being in the last kickings of the mainstream DOS age, the
> writing was clearly on the wall.

MS had been trying to replace DOS since mid-'80s, with OS/2 (and of course NT, later on). They abandoned stand-alone MS-DOS in 1994. They abandoned Win9x in 2001 with XP (and support ended in 2006, as I'm sure you remember). With XP, they declared [MS-]DOS "dead".

But now even XP is dead. So are a bunch of older OS versions (Linux 2.4, Mac OS X PPC, OS/2 4.x, FreeBSD 6.x/7.x). So what?

> Even people still creating DOS programs (like me back then) wanted
> productivity. Getting rid of the 640k barrier and LFN was the norm (nearly
> all ran under Win9x, even if only to multitask DOS boxes).

DJGPP 2.0 already supported all of that in 1996.

> Of course somebody would whine that BP7 would generate smaller binaries,
> and point to some badly maintained LFN unit for LFN support and protected
> mode (which killed the 640kb limit, but not the 64k limit) as scapegoats.
> Then somebody else would comment that TP6 generated even smaller binaries.
>
> Then somebody would whip out TP4 and convert the results to .COM; finally,
> somebody would write an application that did a bit of 32-bit register
> access and an LFN int in assembler, and prove that assembler was Turing
> complete. (Not that anybody asked.)

AFAIK, TP used generic MZ .EXEs, hence it was limited to 640 kb max .EXE size. So obviously size would matter if hitting that limit. But most people never did. (Not sure about BP7 DPMI stuff. NE? That probably had different limits.)

However, the quest for smallest size is indeed useless for extremely trivial programs. Then again, is optimization ever useless? And if so, when do you draw the line? How would you ever know if it's good enough?

> > Not even the same dialects or architectures!
>
> Programming was mostly go32v2. I don't think the win32 port existed yet,
> and I was dabbling with Linux on a 486 server.
>
> > (TP only and no AMD64.) And size usually mattered more back then than
> > now.
>
> Not that much; even in 1997 it didn't matter much.
>
> Yes, people still made rescue floppy discs back then and shoehorned
> something onto them (yes, even I used UPX once), but one didn't center the
> whole world around them anymore.

Good for them for being apathetic (not!). Give them a medal.

What makes people so accepting of what they see being spewed out by compilers? How would they ever know if it's good enough? "Well, it's not gigabytes. Well, it still fits in RAM. Well, nobody complained!" Just pretend that it's already optimal, then you don't have to do anything.

> > I'm not saying size is very important in most cases. Obviously people
> > don't know, don't care, don't have time, "it just works", etc. But it's
> > ridiculous to pretend that it's somehow bad to actually identify and fix
> > the problem!
>
> That's the point. It is only a *PROBLEM* in minimalists' minds. Even back
> then I had better things to do with my time. You only do something about
> size if it is really, really prohibitive, not because of some underbelly
> feeling that it is not "right" (e.g. during a brief WinCE stint).

Then why does everyone (e.g. FreeBSD, Cygwin, SourceForge) compress every single binary package?

For that matter, why optimize anything at all by default? It's all vanity. Don't optimize unless needed. So everyone who uses default "gcc -g -O2" is obsessing over nothing?

Speed is obviously the same as size here, a useless trifle.

For that matter, why write portable code? It just takes longer. Obviously you have better things to do with your time.

> That's the whole issue here. Micromanaging size without a direct reason is
> a hobby. Sure, the proponents try to dress it up (I still have this old
> 8088 machine, I still deliver over 2400-baud lines, I spend my days
> creating boot disks), but it is still the same.

Sure, "Hello, world!" is useless. So is worrying about 100 vs. 200 bytes. But that doesn't mean optimizations are useless. What do you think "optimize" means???

> Nobody is really planning any form of development trajectory. They just
> want to be confirmed that they have reached the pinnacle of their personal
> quest, the smallest binary, and to demonstrate their "knowledge".

But it is real concrete "knowledge" as opposed to just blindly saying, "Good enough!" without any sort of measurement.

Resources are inherently limited. Computers are full of all kinds of arbitrary limits, some intentional, others not. Anything that pretends "virtually unlimited" is a liar. Just because you don't see it yourself doesn't mean it's not hidden away somewhere. And that also doesn't mean it's not a real problem.

> > (But presumably you're just complaining about the complainers, not those
> > actually doing work to fix this)
>
> The sadness of it all is that, despite my reluctance, I'm still probably
> doing more to fix this. At least in FPC, and I have been doing it for a
> long time now. It might not be enough in your eyes, but that doesn't mean
> that nothing is done, or that there is no eye for the worst excesses.

I don't remember complaining here in this thread about FPC at all. We're just saying, in general, that most compilers are suboptimal. Just because you want to pretend it's not important or already good enough doesn't mean that's true. Sure, people can complain about anything, but half the time they actually have a point.

---
Know your limits.h

marcov(R)

19.07.2014, 16:42

@ Rugxulo
 

FreeBasic and FreePascal

> > > 1997? You seriously want to pretend that you're using the same (or
> > > even similar) setups as back then??
> >
> > No, but the comparisons were much the same.
>
> Not at all, it was a different world. We never thought we'd ever have (much
> less need) multi-core Gigahertz or Gigabytes of RAM or Terabytes of hard
> disk or 64-bit cpus at home. Back then, 32-bit / 4 GB max RAM / FAT32 was
> good enough for everyone.

And that was already enough to not care if a binary was 50kb or 1MB if it did something useful.

> > Though mostly TP/BP7, not TP5.5, which was nearly completely forgotten
> > by then. It only came back into view due to Borland setting it free.
>
> Just to compare to 1997's GCC for x86: 2.7.2.x only supported 386 and 486
> [sic], and -Os didn't exist yet. But it was still good enough for Quake!

I never used DJGPP. The only thing I used gcc for was to compile a kernel.

> > And despite being in the last kickings of the mainstream DOS age, the
> > writing was clearly on the wall.
>
> MS had been trying to replace DOS since mid-'80s, with OS/2 (and of course
> NT, later on). They abandoned stand-alone MS-DOS in 1994.

And in that period, together with an explosion in computer use (elderly aunts getting PCs), the focus shifted from DOS to Windows. That is what I meant by the writing on the wall. I didn't like it back then; OTOH, I did like LFNs.

> They abandoned
> Win9x in 2001 with XP (and support ended in 2006, as I'm sure you
> remember). With XP, they declared [MS-]DOS "dead".

XP was the final nudge for holdouts (well, now the 64-bit migration hammers that home even harder). However, during that period it was already clear that DOS' mainstream use was over.

> But now even XP is dead. So are a bunch of older OS versions (Linux 2.4,
> Mac OS X PPC, OS/2 4.x, FreeBSD 6.x/7.x). So what?

(Of those I sometimes still use OS X PPC)

> > Even people still creating DOS programs (like me back then) wanted
> > productivity. Getting rid of the 640k barrier and LFN was the norm
> > (nearly all ran under Win9x, even if only to multitask DOS boxes).
>
> DJGPP 2.0 already supported all of that in 1996.

I never cared for DJGPP, except as an FPC donor (think as/ld/make/gdb/extender). (Or any GCC derivative for an MS platform.)

> > Of course somebody would whine that BP7 would generate smaller binaries,
> > and point to some badly maintained LFN unit for LFN support and protected
> > mode (which killed the 640kb limit, but not the 64k limit) as scapegoats.
> > Then somebody else would comment that TP6 generated even smaller
> > binaries.
> >
> > Then somebody would whip out TP4 and convert the results to .COM;
> > finally, somebody would write an application that did a bit of 32-bit
> > register access and an LFN int in assembler, and prove that assembler
> > was Turing complete. (Not that anybody asked.)
>
> AFAIK, TP used generic MZ .EXEs, hence it was limited to 640 kb max .EXE
> size.

Real-mode TPs like 5.5 and 7 were 16-bit, so they had the 640k limit. Moreover, TP didn't have a huge memory model (or equivalent), so allocations were limited to 64k. Worse, it had a 64kb static data segment.

> (Not sure about BP7 DPMI stuff. NE? That probably had different
> limits.)

Code limits were pretty far away. Memory could be stretched further but suffered from exhaustion of selectors (8192 max), and the above 64k limits still applied.

> However, the quest for smallest size is indeed useless for extremely
> trivial programs. Then again, is optimization ever useless? And if so, when
> do you draw the line?

When it's noticeable. 100 files of 100-200kb on my 2.5GB HDD back then were not noticeable (compared to the 20-50kb they would have been in TP).

> How would you ever know if it's good enough?

That's the point. If you don't notice without deliberately taking a closer look, there is no difference.

> Good for them for being apathetic (not!). Give them a medal.

They don't care. Apathetic is something else, namely being overstimulated or weary and not being able to take action because of it.

> What makes people so accepting of what they see being spewed out by
> compilers?

Most simply consider it a given. But in this particular (size) case, they don't even care, within bounds.

> How would they ever know if it's good enough?

How do you?

> "Well, it's not
> gigabytes. Well, it still fits in RAM. Well, nobody complained!" Just
> pretend that it's already optimal, then you don't have to do anything.

Having a hypothetically optimal situation isn't even on their radar. They want to do things. And even if they do care about "optimal", they will focus their attention on optimizing something they will actually notice.

> > That's the point. It is only a *PROBLEM* in minimalists' minds. Even
> > back then I had better things to do with my time. You only do something
> > about size if it is really, really prohibitive, not because of some
> > underbelly feeling that it is not "right" (e.g. during a brief WinCE
> > stint).
>
> Then why does everyone (e.g. FreeBSD, Cygwin, SourceForge) compress every
> single binary package?

Server load. Fewer bytes moved. Easier on the connection (theirs and the downloaders'). Without visible downsides, since it's reversible. But they do things in bulk, and connections have had practical limits for much longer than HDD sizes.

> For that matter, why optimize anything at all by default?

Because many of those you do notice, and they come without many downsides (except for the compiler builder). Size-optimizing binaries by reworking the RTL or switching compilers is a wholly different matter.

Only a few extreme cases actually switch compilers purely because of performance.

> For that matter, why write portable code? It just takes longer. Obviously
> you have better things to do with your time.

That is certainly true, and I don't write portable code by default. E.g. my work apps are hopelessly windows only.

> Sure, "Hello, world!" is useless. So is worrying about 100 vs. 200 bytes.
> But that doesn't mean optimizations are useless. What do you think
> "optimize" means???

In general or in this case? The compiler optimizer achieves easy and preferably safe optimizations in speed or size.

But binary size comes more from library architecture and target-specific factors than from compiler code generation.

> > Nobody is really planning any form of development trajectory. They just
> > want to be confirmed that they have reached the pinnacle of their
> > personal quest, the smallest binary, and to demonstrate their
> > "knowledge".
>
> But it is real concrete "knowledge" as opposed to just blindly saying,
> "Good enough!" without any sort of measurement.

It is only concrete within their frame of mind, sacrificing things others don't want to sacrifice: locales, language awareness, exceptions, RTTI, etc.

> Resources are inherently limited. Computers are full of all kinds of
> arbitrary limits, some intentional, others not. Anything that pretends
> "virtually unlimited" is a liar.

Of course. But you are talking about micro-optimization *AFTER* the sane things have been done. THAT's the "solution searching for a problem" context.

> > doing more to fix this. At least in FPC, and I have been doing it for a
> > long time now. It might not be enough in your eyes, but that doesn't
> > mean that nothing is done, or that there is no eye for the worst
> > excesses.
>
> I don't remember complaining here in this thread about FPC at all.

1. Look at the title.
2. You replied to my reply to w3*'s msg, which was full of FPC vs TP55 comparisons based on size.

> We're
> just saying, in general, that most compilers are suboptimal.

I think something like FPC is fairly close to optimal. It could be more optimal, but the interest and the manpower are lacking.

> Just because
> you want to pretend it's not important or already good enough doesn't mean
> that's true.

I say it hardly exists as a real problem, and that it is mostly a hobby of a few people. And as additional evidence I present the fact that most of those people micro-optimize and minimize old compilers instead of actually carrying responsibility in OSS projects.

> Sure, people can complain about anything, but half the time
> they actually have a point.

That's not my experience. I think complaining is as much to relieve tension and stress. But my opinion might be clouded by having spent a few years on a helpdesk.

glennmcc(R)

North Jackson, Ohio (USA),
05.07.2014, 23:40

@ w3a537
 

FreeBasic and FreePascal

You'll have no such problem here on this board.

All of us are quite proud to still be using DOS and tiny 100% DOS programs. :)

> It really all started when somebody criticized me over my floppy drive.
>
> And the fact that my base DOS system can be copied to a floppy and it
> fits.
>
> I would provide it here on BTTR but I'm not sure about the licenses for
> all the programs.
>
> And I was criticized over my preference of small programs. Selecting small
> is a waste of time they said.
>
> And then somebody told me to buy a big USB stick.
>
> None of these had anything to do with FP, but they became major components
> of the thread.
>
> SB

---
--
http://glennmcc.org/

w3a537(R)

Colorado Springs CO USA,
06.07.2014, 04:50

@ glennmcc
 

FreeBasic and FreePascal

> You'll have no such problem here on this board.
>
> All of us are quite proud to still be using DOS and tiny 100% DOS programs.

Thank you, thank you.

SB

nickysn(R)

21.08.2014, 10:11

@ marcov
 

FreeBasic and FreePascal

> > I don't have even the slightest clue what you're referring to. But the
> > Internet is full of all types of cranks (ahem, comments on every YouTube
> > video ever), so it's probably just normal everyday angst. Don't sweat
> the
> > small stuff.
>
> Some people are simply oversensitive.
>
> Oh, and he apparently missed the post with the fpc 16-bit results:
>
> Using last fall's 16-bit snapshot (tiny memory model:)
>
> D:\pp16>file hw.com
> hw.com: DOS executable (COM)
>
> D:\pp16>dir hw.com
> 07/04/2014 10:49 16,528 hw.com

Current trunk is even better:

Hello world sizes in different memory models:

tiny (.com): 13978 bytes
tiny (.exe): 14266 bytes
small: 14190 bytes
medium: 18608 bytes
compact: 20090 bytes
large: 24780 bytes

fpctris sizes:

tiny (.com): 40660 bytes
tiny (.exe): 40948 bytes
small: 40868 bytes
medium: 50034 bytes
compact: 50144 bytes
large: 59598 bytes

And, of course, they will only get even smaller as the 16-bit code generator (and RTL) improves. I know it's still larger than TP7, but in terms of executable sizes, it's already pretty competitive with 16-bit C compilers such as Borland C. That being said, I don't see your patches that improve binary sizes. If, instead of working on the 16-bit target in the past year, I had complained on the forums that FPC does not support 16-bit DOS, everybody would have laughed at me and this target wouldn't exist.
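
(For anyone wanting to reproduce these numbers: with a trunk cross-compiler, picking the target and memory model should look roughly like the line below. -Tmsdos and -Wm<model> are the documented switches; -Pi8086 assumes the fpc driver can find the i8086 cross-compiler, and an older snapshot may differ.)

fpc -Pi8086 -Tmsdos -WmTiny hello.pas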

> Rugxulo: -O3 won't do much on a program that only contains one statement
> (writeln('hello world');)
You probably have to compile the RTL also with -O3 in order to see its effect. There's also whole program optimization, which can further improve executable sizes:

http://wiki.freepascal.org/Whole_Program_Optimization
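
Per that wiki page, WPO is a two-pass build: the first pass collects feedback into a file, and the second pass uses it. Roughly like this, with devirtualization as the example optimization (flag spellings as in the wiki; I haven't re-verified them against current trunk):

fpc -OWdevirtcalls -FWhello.wpo hello.pas
fpc -Owdevirtcalls -Fwhello.wpo hello.pas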

Laaca(R)

Czech republic,
05.07.2014, 12:07

@ w3a537
 

FREE Basic & Pascal

Yes, the EXE files generated by FreePascal are rather big. However, you can reduce the size.
My test results for a "Hello world!" program:
(begin writeln('Hello world!');end.)

FPC 1.0.10 (static link+strip all debug info option): 61440 bytes
FPC 1.0.10 (smart link+strip all debug info option): 25600 bytes (can be compressed with "UPX --best" to 13740 bytes)

FPC 2.6.4 (static link+strip all debug info option): 140800(!) bytes
FPC 2.6.4 (smart link+strip all debug info option): 39424 bytes (can be compressed with "UPX --best" to 18320 bytes)
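
(If you build from the command line rather than the IDE, those options map onto switches roughly as follows; -CX/-XX enable smart linking and -Xs strips debug info:)

fpc -Xs hello.pas (static link + strip)
fpc -CX -XX -Xs hello.pas (smart link + strip)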

Now what happens if we just for fun include another unit (in our example CLASSES).
(uses classes; begin writeln('Hello world!');end.)

FPC 1.0.10 (static,...): 217600 bytes
FPC 1.0.10 (smart,...): 97792 bytes

FPC 2.6.4 (static,...): 481792(!!) bytes
FPC 2.6.4 (smart,...): 216064 bytes

Conclusion:
1) The size of an FPC binary is caused by the automatically included set of basic units (unit System, possibly also ObjPas). Between FPC 1.0.10 and FPC 2.6.4, many new features, procedures, and functions came into these units, so their size increased remarkably.
-> If size is critical, use FPC 1.0.10.

2) The same effect appears with every other unit included.
-> If size is critical, don't include other units just for a few trivial functions; code the necessary things yourself.

3) Smart linking helps a lot. It tries to include in the generated EXE only the touched parts of units and strips the unused parts.
The more internally complicated a unit is, the worse the result you can expect from smart linking.
-> Good to know when creating your own units.

4) It does not follow from the testing example above, but the optimization setting (level 0-2, size/speed code generation) has only a very small influence on the generated size (in both FPC 1.0.10 and 2.6.4).


However, on bigger projects these size differences are less remarkable.
I tried several compilations of my testing program (a loader and viewer for various font files).
It uses quite a lot of my own code but only very small bits from the Crt, DOS, Objects, and Go32 units.

FPC 1.0.10 static: 222152 (L2) or 229816 (L0)
smart: 189440

FPC 2.6.4 static 317952 (L2) or 318976 (L0)
smart: 105984 (!!!)

The superiority of 2.6.4 smartlinking is surprising in light of the previous tests.
Perhaps 2.6.4 has a better dead-code removal algorithm.

---
DOS-u-akbar!

marcov(R)

05.07.2014, 14:02

@ Laaca
 

FREE Basic & Pascal

> Now what happens if we just for fun include another unit (in our example
> CLASSES).
> (uses classes; begin writeln('Hello world!');end.)

> FPC 1.0.10 (smart,...): 97792 bytes
> FPC 2.6.4 (smart,...): 216064 bytes

Makes sense; the 2.x series has many more Delphi features.

> 1) The size of an FPC binary is caused by the automatically included set
> of basic units (unit System, possibly also ObjPas). Between FPC 1.0.10 and
> FPC 2.6.4, many new features, procedures, and functions came into these
> units, so their size increased remarkably.
> -> If size is critical, use FPC 1.0.10.

I would use the 16-bit compiler. It is new and generates smaller binaries, AFAIK; fpctris is 50k. Speaking of which, it seems that the large memory model is now also largely done.

> FPC 1.0.10 static: 222152 (L2) or 229816 (L0)
> smart: 189440
>
> FPC 2.6.4 static 317952 (L2) or 318976 (L0)
> smart: 105984 (!!!)
>
> The superiority of 2.6.4 smartlinking is surprising in light of the
> previous tests.
> Perhaps 2.6.4 has a better dead-code removal algorithm.

I've no real explanation for this (especially the magnitude of the differences).

AFAIK some of the tables of the main program were optimized to be more smartlinkable (tables with unit initialization, resourcestrings, etc.). Maybe that has some knock-on effect in your case.

Of course, the entire code generator was rewritten, so it could be that differences slipped in (e.g. minor mods that create a finer division into sections in certain cases). Maybe the division into sections was altered during the 2.x CG rewrite, but that wouldn't be announced as a feature.

Last but not least, I assume LD got an update somewhere along the way. It might be worth testing with 2.7.x too, since IIRC 2.7.x has an internal linker for go32v2.

Rugxulo(R)

Usono,
09.07.2014, 01:48

@ marcov
 

FREE Basic & Pascal

> Last but not least, I assume LD got an update somewhere along the way.

Nope, FPC 2.6.2 GO32V2 is still using old 2.17 from 2008, COFF only. According to GCC 4.9.0's online manual, the whole --gc-sections stuff only works with ELF and GNU BinUtils or Solaris' linker. It doesn't even work for PE/COFF (tier two; see here), which is 1000x more popular than DJGPP (tier nada).

This is why things like this happen: mkdir() pulling in ctime.o
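
For contrast, on ELF targets the usual recipe for having the linker drop unused code is per-section compilation plus link-time garbage collection:

gcc -ffunction-sections -fdata-sections -c foo.c
gcc -Wl,--gc-sections -o foo foo.o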

> It might be worth testing with 2.7.x too, since IIRC 2.7.x has
> an internal linker for go32v2.

The daily snapshot of GO32V2 is still broken. It's still incorrectly compiled, so it doesn't work at all (-iSO -iTO -iSP -iTP = "win32 win32 i386 i386"). You might want to ping Pierre about that.

marcov(R)

10.07.2014, 11:29

@ Rugxulo
 

FREE Basic & Pascal

> > Last but not least, I assume LD got an update somewhere along the way.
>
> Nope, FPC 2.6.2 GO32V2 is still using old 2.17 from 2008,

Laaca was comparing with FPC 1.0.10 which is from 2003.

> COFF only.
> According to GCC 4.9.0's
> online
> manual, the whole --gc-sections stuff only works with ELF and GNU
> BinUtils or Solaris' linker. It doesn't even work for PE/COFF (tier two;
> see here),
> which is 1000x more popular than DJGPP (tier nada).


True, but since Go32v2 (and I assume DJGPP) programs are smaller, you can go the "generate one assembler file per symbol, and then AR them together" way.

FPC 1.x did that, and yes it is a bit slow, but not that bad on Go32v2. On windows it was worse due to the header files, but that is probably related to FPC's auto generation of import libraries, and might be different for mingw.

FPC 1.9 and higher did the AS and AR steps internally for (our) TIER 1 minus OS X. FPC 2.2.x features internal linkers for win32, wince, and win64, though not all mingw libraries were linkable with the internal linker; that was only resolved with 2.6.0.

> This is why things like this happen:
> mkdir()
> pulling in ctime.o

Wrong. It happened because nobody created proper section smartlinking for those binary targets.

A library programmer can't be required to assume responsibility for each and every dead or sleeping target because he wants to change a couple of lines. One might as well cease development of the library code.

That being said, we try to avoid making global decisions on FPC without testing them on Windows, OS X, and Linux. This is to avoid unixisms creeping in.

> > It might be worth testing with 2.7.x too, since IIRC 2.7.x has
> > an internal linker for go32v2.
>
> The daily snapshot of GO32V2 is still broken. It's still incorrectly
> compiled, so it doesn't work at all (-iSO -iTO -iSP -iTP = "win32 win32
> i386 i386"). You might want to ping Pierre about that.

If I talk to him, I will pass it on, but I don't talk/mail with him that often. He's not been very active in the last few months, so he might be aware of it but simply not have the time.

Note that the internal linker should make it easier to create snapshots on non-DOS hosts, though an AS is probably still needed for the startup code (but caching that nearly immutable file and replacing AS with a dummy might work).

Oso2k(R)

10.07.2014, 20:02

@ marcov
 

FREE Basic & Pascal

ASIC BASIC for DOS could create a Hello World com in 360 bytes (according to Wikipedia)[1]. I think one could get that down to about 32 bytes if you only used ASM.

[1] http://en.wikipedia.org/wiki/ASIC_programming_language

Rugxulo(R)

Usono,
12.07.2014, 04:38

@ marcov
 

FREE Basic & Pascal

> > > Last but not least, I assume LD got an update somewhere along the way.
> >
> > Nope, FPC 2.6.2 GO32V2 is still using old 2.17 from 2008,
>
> Laaca was comparing with FPC 1.0.10 which is from 2003.

1.0.10? Verboden!

I don't know if much has majorly changed between then (2.8??) and now (2.17). I honestly don't think it would make a difference at all!

I do know that Juan increased / fixed the max reloc limit circa 2.22-ish, and latest is 2.24r2. DJ is still COFF maintainer for us, but I don't know if he's done any "major" work on it in a billion years.

It seems that GNU (BinUtils, GCC) mostly only care for ELF with partial concern for other (tier two) platforms (Mac OS X: Mach-O, Windows: PE/COFF). DJGPP gets very little, if anything, from them. Even online LD changelog doesn't say "DJGPP" at all anywhere. Plus, MIPS ECOFF was removed in 2.24. (Similarly, GCC dropped "generic COFF" back in 4.5.x days, IIRC.)

BTW, I remember saying FPC didn't work with 2.19, but obviously nobody (nor I) ever fixed that nor delved deeper. So FPC may not work with newer ones. Ironically, I think the only impetus to upgrade BinUtils for DJGPP is probably the opposite: newer GCCs won't work with older ones.

> True, but since Go32v2 (and I assume DJGPP) programs are smaller, you can
> go the "generate one assembler file per symbol, and then AR them together"
> way.

I don't know if FPC GO32V1's output is smaller than FPC GO32V2's.

DJGPP v1 was smaller than v2 because it had a separate GO32.EXE extender as well as lesser support (no LFNs, no symlinks, less POSIX, etc).

> FPC 1.x did that, and yes it is a bit slow, but not that bad on Go32v2.

Untested, but I still assume old FPC 1.x is faster just by virtue of doing less.

> FPC 1.9 and higher did the AS and AR step internally for (our) TIER 1 minus
> OS X.

What is FPC tier one? Linux + Windows on i386 or AMD64?

> Wrong. It happened because nobody created proper section smartlinking for
> those binary targets.

32-bit flat memory model (and virtual memory) implied that everything was practically "unlimited". So nobody cared about a few more megabytes. Besides, AFAIK, DJGPP was never commercially funded. It's always been very few volunteers. So the urge was even less, even back when Win9x was ubiquitous.

> A library programmer can't be required to assume responsibility for each
> and every dead or sleeping target because he wants to change a couple of
> lines. One might as well cease development of the library code.

First priority is just to get it working. But to imply that one isn't responsible for one's own code footprint is a bit naive. Just because nobody cares or takes initiative doesn't mean it's not worth doing.

> That being said, we try to avoid making global decisions on FPC without
> testing them on Windows, OS X, and Linux. This is to avoid unixisms
> creeping in.
>
> Note that the internal linker should make it easier to create snapshots on
> non-DOS hosts, though an AS is probably still needed for the startup code
> (but caching that nearly immutable file and replacing AS with a dummy
> might work).

"Wanted: AS for bootup" :-P

marcov(R)

12.07.2014, 14:43

@ Rugxulo
 

FREE Basic & Pascal

> > > > Last but not least, I assume LD got an update somewhere along the
> way.
> > >
> > > Nope, FPC 2.6.2 GO32V2 is still using old 2.17 from 2008,
> >
> > Laaca was comparing with FPC 1.0.10 which is from 2003.
>
> 1.0.10? Verboden!

Despite being one of the longest-lived versions, it was doomed from the start. Basically, with 1.0.6 the core team pretty much stopped working on the 1.x branch; the differences since 1.0.6 are mostly critical fixes and user contributions.

I also used it, of course, since back then the trunk IDE was still unusable (though the compiler was better).

> It seems that GNU (BinUtils, GCC) mostly only care for ELF with partial
> concern for other (tier two) platforms (Mac OS X: Mach-O, Windows:
> PE/COFF).

People mostly work on the ELF targets, yes. AFAIK most of the formerly Cygnus projects are under the stewardship of Red Hat (gcc, gdb, probably also binutils).

Windows lags too, though not as badly as ten years ago; now it is mostly functional.

AFAIK there is no functional GNU binutils for OS X, only for the OSS version, Darwin, and that is a minority target (probably held up by a few passionate users). Apple's binutils (pre-LLVM) were not based on GNU, but were their own development on top of early-'90s BSD tools.

> DJGPP gets very little, if anything, from them.

That's a matter of POV. One might as well say DJGPP gives very little, if anything, to them, apparently.

> Even online LD
> changelog doesn't say "DJGPP" at all anywhere. Plus, MIPS ECOFF was removed
> in 2.24. (Similarly, GCC dropped "generic COFF" back in 4.5.x days, IIRC.)

I assume the few *nix COFF targets have retired by now, or is that a console game binary format?

> BTW, I remember saying FPC didn't work with 2.19, but obviously nobody (nor
> I) ever fixed that nor delved deeper. So FPC may not work with newer ones.
> Ironically, I think the only impetus to upgrade BinUtils for DJGPP is
> probably the opposite: newer GCCs won't work with older ones.

Such things might be a problem with combined DJGPP-FPC linking. But I hope the internal linker will, in the long term, fix a lot of the small hurts, as well as ensure that the linker is automatically available on each target.

> > True, but since Go32v2 (and I assume DJGPP) programs are smaller, you can
> > go the "generate one assembler file per symbol, and then AR them together"
> > way.
>
> I don't know if FPC GO32V1's output is smaller than FPC GO32V2's.

I meant vs windows where the number of symbols quickly goes through the roof due to the windows headers.

> Untested, but I still assume old FPC 1.x is faster just by virtue of doing
> less.

It was, but not in all scenarios.

> > FPC 1.9 and higher did the AS and AR steps internally for (our) TIER 1
> > minus OS X.
>
> What is FPC tier one? Linux + Windows on i386 or AMD64?

Not really an official, fixed policy. Currently I'm the release manager, and I usually wait till Linux, Windows, and OS X are uploaded and confirmed working by a few users/devels.

If OS X is hit by a major version transition, it is sometimes uploaded later (but that happens more with the RC than the final release).

But usually FreeBSD (because I package it, and it rarely breaks because it is so similar to Linux and has a handful of users) and OS/2 (Tomas) are quick too.

DOS is also quite often slow to upload for the release candidate, but it is typically done by Tomas or Pierre as a matter of routine for the final release. DOS (I should say go32v2 now, to avoid future ambiguity with "msdos", which is the 16-bit target) is, however, quite often hit by last-minute changes and repackages. More than any of the other targets.

Note that Linux means a .tar.gz, not the distribution formats (.deb, .rpm); in recent times those are *always* late.

> > Wrong. It happened because nobody created proper section smartlinking
> > for those binary targets.
>
> 32-bit flat memory model (and virtual memory) implied that everything was
> practically "unlimited". So nobody cared about a few more megabytes.

Hmm. As you know, I'm no binary-size maniac, but even that is a bit simplistic. DPMI has tighter limits than Linux, and Linux/ELF _got_ section smartlinking.

> Besides, AFAIK, DJGPP was never commercially funded. It's always been very
> few volunteers. So the urge was even less, even back when Win9x was
> ubiquitous.

Yes. A big difference. We are in the same boat.

> > A library programmer can't be required to assume responsibility for each
> > and every dead or sleeping target because he wants to change a couple of
> > lines. One might as well cease development of the library code.
>
> First priority is just to get it working. But to imply that one isn't
> responsible for one's own code footprint is a bit naive. Just because
> nobody cares or takes initiative doesn't mean it's not worth doing.

A library programmer can't be responsible for a target not holding up its own end and not implementing features to actually reduce footprint. In really bad cases you might convince him to do a workaround, but for the small amounts it's simply "too bad, fix your target!".

> > Note that the internal linker should make it easier to create snapshots
> > on non-DOS hosts, though an AS is probably still needed for the startup
> > code (but caching that nearly immutable file and replacing AS with a
> > dummy might work).
>
> "Wanted: AS for bootup" :-P

Wanted: restructure the bootup code into Pascal.

That means it can still contain asm, but handled by the compiler's internal assembler. That probably needs binary-format assembler pseudo-instructions added to the compiler's internal assembler, though.

Currently only Linux has this (it has the most architectures).

w3a537(R)

Colorado Springs CO USA,
05.07.2014, 18:53

@ Laaca
 

FREE Basic & Pascal

Simply choose the best compiler for the project at hand.

I asked them about FP and they said 175k but it would
have been better if they had given me a number with all
the switches set first.

I was ready to just go away but then blah blah blah.

SB

w3a537(R)

Colorado Springs CO USA,
05.07.2014, 19:06

@ w3a537
 

FREE Basic & Pascal

And now I plan to install Puppy for DOS.

You can stick it in a ZIP, put the ZIP on a ramdisk, kick-start Puppy
from a BAT file, UNZIP it, run TINY.EXE to load Linux and run Linux to
your heart's content, exit Linux, and you are back in DOS again.

This is what the readme says anyway.

SB

RayeR(R)

CZ,
08.07.2014, 17:08

@ w3a537
 

FREE Basic & Pascal

> You can stick it in a ZIP, put the ZIP on a ramdisk, kick-start Puppy
> from a BAT file, UNZIP it, run TINY.EXE to load Linux and run Linux to
> your heart's content, exit Linux, and you are back in DOS again.

Interesting, I hadn't heard about this possibility yet. Do you really mean a transition from Linux back to DOS without a reboot? Is it a special feature of Puppy only? I commonly use loadlin or linld to boot Linux (Debian) from DOS, but then I need a reboot to get back...

---
DOS gives me freedom to unlimited HW access.

w3a537(R)

Colorado Springs CO USA,
08.07.2014, 22:38

@ RayeR
 

FREE Basic & Pascal

That's what the README says. I have not installed this package yet.
If I do have to reboot it's not a big deal.

SB

DOS386(R)

27.10.2014, 02:40

@ w3a537
 

FREE-Baysic & FREE-Pascal

FreeBasic 1.0 is out :-D (maybe broken, I'll test)

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

Rugxulo(R)

Usono,
01.11.2014, 20:22

@ DOS386
 

FREE-Baysic & FREE-Pascal

> FreeBasic 1.0 is out :-D (maybe broken, I'll test)

According to the forum thread New DOS Ver 1.00.0 from a month ago, it is buggy due to some backslash path mixup. There are Git builds if you want to try newer snapshots (but I haven't tried them personally): http://users.freebasic-portal.de/stw/builds/

A very minimal test in raw native FreeDOS shows it seems to work fine, at least for simple stuff. Except if I mistakenly try to enable LFNs via DOSLFN, then it chokes.

BTW, not as criticism but just as commentary, here are some details:

FreeBASIC-1.00.0-dos.zip = 6,523,010 bytes packed = 1630 files = 17,371,513 bytes unpacked. Of those files, 937 (807,141 bytes) are .bas examples, 3 are runtime *.o (10,877 bytes), 9 are *.a (3,848,234 bytes), and 6 are .EXE (9,483,264 bytes with GDB.EXE taking up half of that).

I also compiled "Hello, world!" with fb, fblite, and qb dialects. Same size for all: 89,088 bytes (or 47,232 UPX'd). As a very simple comparison, when I compiled my Befunge-93 interpreter (ultra simple, thus console only), with only truly minimal changes, it was ("fb") 202,240 bytes (89,244), ("fblite") 142,336 (72,276), and ("qb") 180,736 (89,136).

DOS386(R)

10.11.2014, 04:07

@ Rugxulo
 

FREE-Baysic & FREE-Pascal

> According to the forum thread it is buggy due to some backslash path

The DOS version still works in DOS (EDR-DOS or FreeDOS), but the Win32 version stopped working on ME !!!

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

Rugxulo(R)

Usono,
11.11.2014, 01:10

@ DOS386
 

FREE-Baysic & FREE-Pascal

> > According to the forum thread it is buggy due to some backslash path
>
> The DOS version still works in DOS (EDR-DOS or FreeDOS), but the Win32
> version stopped working on ME !!!

What broke, the installer? Or FBC.EXE won't even run/compile anymore? What about "-gen gcc"?

What functionality did you use it for before? Win32-specific stuff that isn't similarly supported in the DOS version? "Just use DJGPP!" :-D :-P

I would probably consider this a bug, but considering that (for years, 2006+) developers (and associated compiler tools) have long abandoned Win9x and even XP (also EOL'd), there's very little chance of anyone caring enough to fix this for you. (Nor me, but mostly because I'm clueless about Windows.)

P.S. I need to (re)test the latest DOS snapshot (Nov. 7), but I doubt much changed in sizes. Also, I might've incorrectly reported the size for "fblite", not that it really matters. BTW, this thread is way too long as it is, so it should probably be split off when something like FBC 1.0.1 comes out. Oh, BTW, interesting exercise ... does the Win32 version run under the latest ReactOS?
