Most of this "Size Matters" (Size Doesn't Matter) page is a bunch of "In My Opinion" rubbish. This article needs an author, because it is highly biased and very opinionated. It is definitely not the view of all FPC programmers - but by placing no author at the bottom of the page, it makes it look as if it were. I can sort of guess who wrote the article, though, because it is an opinion held by many FPC developers.
First of all, let's get a few things straight. What is FPC good for? Why would anyone use FPC over Delphi? For Linux. For BSD. And what are Linux and BSD good for? Server applications. What does a server application entail? Bandwidth.
Size is important in CGI programs and huge-scale servers. Guess what - if I have a shared server with 500 CGI programs on it from all sorts of users, then eventually those 300MB of FPC executables are going to start affecting bandwidth costs and hard drive costs.
Big hard drives are very nice - but guess what - a big hard drive takes much longer to scan for and fix errors. A big hard drive requires much more time to defragment. A big hard drive is extremely hard to back up, since it takes DAYS rather than hours.
So my question is - if size doesn't matter - Why use FPC? Why not use .NET or Java?
Think about what FPC is useful for. FPC is not useful for Windows GUI programs (despite what the Lazarus team wants you to believe). FPC serves a very niche market and must support that niche - your niche market is CGI and server programs, systems administration programs, and embedded devices. Delphi cannot create BSD or Linux CGI programs, or embedded software. Delphi can create GUI programs.
CGI programs, embedded software, and systems tools are small. Big GUI programs are big. FPC is not a GUI generator.
Find your niche market. Discover what people are using FPC for in the real world. I'm not talking about those hobbyists using FPC to make Kylix-like GUI applications for Linux. That market is dead. If Linux GUI programs were really what FPC was used for, then I could totally agree that having a 300KB app on ONE PERSON's 200GB hard drive isn't a big deal.
But try to debug a server with 500 bloated 3MB programs, and then we'll start talking. You want FPC to be taken seriously in the systems world? You want it to be taken seriously in the web world? Then start focusing on that world - because, let me repeat, no one is using FPC to create GUI applications. By "no one", I mean: ask yourself why anyone wouldn't just use Delphi. I know there is Pixel. One GUI application. But I'll bet you most people use FPC where Delphi does not shine. Delphi does not shine in the server market. The bandwidth market.
I'm not saying that FPC must create 20KB systems and CGI programs. I'm saying that a 1MB CGI program that prints HELLO WORLD is not acceptable. I realize that FPC can still create fairly small systems programs - and this is good. The current state of FPC is not ridiculous. But it is heading that way, with the attitude I see, like "size never matters". It's funny that in the article someone mentions that "FPC 2.1.1 beats Delphi". Huh? Beats Delphi in what - speed? So speed matters but size doesn't? All of a sudden speed can matter, but size is really not important? That's like saying that size matters, but speed doesn't.
So basically the article is rubbish - because it claims that FPC 2.1.1 is going to beat the pants off Delphi, while at the same time saying that size isn't really important because FPC is geared toward application development. Applications don't require speed - hint, hint - most software applications that run on the desktop are IDLE 90 percent of the time. Speed is important in file searching, web servers, and systems stuff. Is FPC for systems programming and server programming, or is it for desktop software programming? It seems that some folks think FPC is a great application development tool. I've never seen FPC used as such. I see it being used on the system, on the server, and in niche areas like the Gameboy and embedded devices. I don't see very many good GUI programs coming from FPC. Nor do I know of anyone who uses many desktop programs any more - since everyone is dumb enough to use simple things called web browsers.
Yes, web browser GUIs suck - but they are good enough. If FPC keeps focusing on the desktop software application market, then FPC will have no developers, because people already have good tools to make desktop software applications - and over 90 percent of desktop software applications run on MS Windows, so no one needs a portable compiler to compile their GUI programs on BSD and Linux. What people do need is a systems, database, and server compiler - which is mainly what FPC has been/should be used for in the real world.
- Most of this "Talk:Size Matters" (Size Does Matter in some cases) page is a bunch of "In My Opinion" rubbish. This article needs an author, because it is highly biased and very opinionated. It is definitely not the view of all FPC programmers - but by placing no author at the bottom of the page, it makes it look as if it were. Vincent 09:27, 21 November 2006 (CET)
- You simply miss several points:
- CGI programs written in FPC are usually very small (20 kB and up) because they don't depend on the LCL.
- If you really have a lot of different CGI applications all using FPC, simply compile the RTL, FCL, and LCL into a shared library. I did this and it works. Why don't we do this by default? Because it would increase the memory footprint of single applications: an FPC RTL compiled as a shared lib is around 4 MB, so a simple hello world would cause a 4 MB (!) footprint, and Lazarus applications would be much worse. So for the common FPC application this makes no sense. Typical users run only a handful of FPC applications at once, so shared linking (which also causes a 10% slowdown on i386 machines) makes no sense for the default FPC installation.
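FPK's trade-off can be put in back-of-envelope numbers. This is a hypothetical sketch, not a measurement: it takes the ~4 MB shared RTL figure quoted above, and assumes a statically linked minimal binary of a few hundred kB and a small per-app stub when linking dynamically, to estimate where sharing starts to pay off on disk.

```python
# Sketch of the static vs. shared linking trade-off for FPC binaries.
# The figures are assumptions for illustration, based on numbers quoted
# in this discussion (4 MB shared RTL; minimal static apps in the 100s of kB).

STATIC_BINARY_KB = 300       # assumed size of one statically linked app
SHARED_RTL_KB = 4 * 1024     # RTL as a shared library, paid once per system
SHARED_STUB_KB = 20          # assumed size of one dynamically linked stub

def total_static(n_apps):
    """Total on-disk footprint if every app links the RTL statically."""
    return n_apps * STATIC_BINARY_KB

def total_shared(n_apps):
    """Total footprint with one shared RTL plus small per-app stubs."""
    return SHARED_RTL_KB + n_apps * SHARED_STUB_KB

# Break-even: the smallest number of apps where sharing wins on disk space.
break_even = next(n for n in range(1, 1000) if total_shared(n) < total_static(n))
print(break_even)                         # -> 15 with these assumed sizes
print(total_static(5), total_shared(5))   # a "handful" of apps: static is smaller
```

With these (assumed) numbers, sharing only wins past roughly fifteen coexisting FPC applications, which is consistent with FPK's point that it is the wrong default for users running a handful of programs.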
- Shared lib hell. You would always need a matching set of shared libs for your application. If you've ever tried to install 3rd party applications on Linux, you know what I'm talking about.
- --FPK 19:49, 10 December 2006 (CET)
First, the page is mine (Marcov). Most developers and IRC regulars have expressed support though.
I don't mind signing the article, the non signing was not on purpose, though I'd rather like the article to be discussed on content than on author.
- FPC minimal apps (and thus CGIs) are more like 100kb, but I still wouldn't care if they were 1MB. You state exactly the minimalistic-without-a-clue philosophy that the FAQ warns about. These "markets" you describe don't exist except in the minds of a few tinkerers who eventually grow out of it. We tried to limit the binary size because of the TP comparisons for years, and it never was enough (and never realistic in the first place). All the people who whined about it in the early years eventually went with Delphi when they grew up, and generated bigger binaries. Nobody stays in the niche, and embedded users ALSO go for productivity and usability first, and size second. ((Flash) memory is awfully cheap nowadays. You could keep your entire CGI example in memory for under the commercial hourly rate of a single programmer.)
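Marcov's "memory is cheap" claim is easy to sanity-check against the original poster's 500-CGI scenario. The figures below are illustrative assumptions (a 1 MB binary, a circa-2006 200 GB disk at the "sub Eur100" ballpark quoted on this page), not measurements:

```python
# Rough cost check of the "500 CGI programs on one shared server" scenario.
# All figures are illustrative assumptions taken from this talk page.

N_PROGRAMS = 500
BINARY_MB = 1.0            # the 1 MB hello-world CGI the original poster objects to
DISK_GB = 200              # an assumed circa-2006 desktop disk
DISK_PRICE_EUR = 100       # Marcov's "sub Eur100" ballpark

total_mb = N_PROGRAMS * BINARY_MB
eur_per_mb = DISK_PRICE_EUR / (DISK_GB * 1024)
storage_cost_eur = total_mb * eur_per_mb

print(total_mb)                       # -> 500.0 MB for all binaries together
print(round(storage_cost_eur, 2))     # -> about a quarter of a euro of disk space
```

Under these assumptions, even the worst-case 500 x 1 MB fleet of CGI binaries occupies well under one euro's worth of disk, which is the order-of-magnitude point Marcov is making; bandwidth per request is a separate question, since a resident CGI binary is not re-transferred per hit.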
- The article already treats the embedded case, and warns against catering the general-purpose FPC distribution to embedded use.
- "FPC over Delphi" is not a real tradeoff. One can also use both. One could also argue "why pascal over PHP for web development" which would be equally sense. And people use FPC and Lazarus for both. (and a lot more purposes than just these two).
- Bandwidth has nothing to do with binary size. Code bloat and binary size are often linked, but not always.
- Any hard disk sold in the last 10 years (and that includes microdrives in PDAs) is larger than 300MB.
- As said in the article, the so-called "bloat" in FPC is not linear. There is a one-off size cost, which is the result of a compromise between usability and size that was carefully crafted by a dozen knowledgeable developers over more than a decade (and it changed over time).
- So the only real impact would be the startup time of the CGI. This is mitigated by several factors:
- Most importantly, modern OSes only map used code into physical memory.
- FPC doesn't use libc by default, so there is no costly dynamic-linker step.
- Most performance-oriented webservers implement some "fastcgi" option that doesn't respawn binaries at all.
- I don't defrag hard drives, except FAT32 under plain DOS and Win9x. NT and Linux have improved filesystem drivers that don't fragment as much. So defragmenting only exercises the drive, and only improves burst performance a bit right afterwards, but not much in the long run. IMHO defragging belongs in the UPX category too :-)
- FPC has its niches, and it grows in these niches. Admittedly, a large part is scraps (specialised use) from the Delphi community, but still. The point is that FPC's non-educational use, though modest, still grows considerably every year (and the number of contributors likewise). I wish I could say the same about Delphi. Apparently, our tradeoffs interest Delphi users.
As a final conclusion to the original poster on this talk page: the page is mainly directed against an opinionated last-byte mentality. I provide arguments, magnitude estimates, etc., and I expect the same of the opposition. At least provide real-world projects _with_ all border conditions that prove your thesis. My claim that HDs are sub Eur100 per so and so much GB is easily checkable; what can you really put against it?
Marcov 20:46, 10 December 2006 (CET)
Personally, I consider programming an engineering triangle between size, maintainability, and speed. Sacrificing size or speed for maintainability is a valid trade-off to make, but so is the other way around. Excessively large executables are IMHO a matter of bad engineering, regardless of whether they cause actual trouble or not. There is always room for better engineering, so there is room to engineer for smaller exes. But is there currently a real problem? IMO not; FPC does a fine job regarding size. Daniel-fpc 22:58, 10 December 2006 (CET)