Size Matters
Revision as of 12:46, 30 August 2020
- 1 Introduction
- 1.1 As a rule of thumb, what are realistic sizes for FPC/Lazarus binaries these days?
- 1.2 Why are the binaries so big?
- 1.3 Is it bad when binaries are big?
- 1.4 Incorrect compiler configuration
- 1.5 2.2.0 issues
- 1.6 UPX
- 1.7 Framework costs
- 2 Unrealistic expectations
- 3 Analysis of various options
- 4 See also
This page is about binary sizes. Over the years there has been a lot of confusion about FPC and Lazarus binary sizes. Before making remarks on the mailing lists, please read this FAQ.
The main reason for this FAQ is that most discussions on this subject tend to descend into details too quickly. Also, the opinions of people who shout tend to invade nearly everything, and often obscure the overall picture more than they contribute to clarity.
As a rule of thumb, what are realistic sizes for FPC/Lazarus binaries these days?
- Anything below 1 MB is not a problem.
- Make sure binaries are properly stripped and smartlinked before measuring, and that ALL libraries are built using smartlinking.
- Do NOT UPX binaries out of habit, unless you have very good reasons to do so (see below). Size matters less than the memory load a decompressed binary causes. Memory is more expensive than disk space. Most remote-access tools implement compression in their tunnel anyway.
- With small apps it is a bit harder to estimate, because the exact RTL size is OS dependent. However, a 100 kB standalone binary that does something can in general be brought below 50 kB.
- Under Windows, 20 kB GUI binaries are no problem.
- The SysUtils unit contains internationalization, textual error messages, exception handling and other stuff that is always linked in when this unit is used (think 40-100 kB in total).
- Lazarus apps on Windows are about 500 kB, but quickly grow to 1.5 MB and beyond as more Lazarus widgets are used. Lazarus binaries can grow past 100 MB when debug info is linked in (comparable to TD32 info in Delphi).
- This is a bit more than when recompiling with an old Delphi version, and a bit less than with modern Delphi versions (with D2009+ the minimal RTL size jumps sharply); it is the price of cross-platform compatibility and project maintainability.
- Once the point is reached where adding extra code no longer introduces new dependencies, this rapid growth disappears.
- The 1.5 MB figure above is a rule of thumb. It depends a lot on your GUI design style, on the number of different widgets you use, and on their complexity.
- For Lazarus applications, part of the binary is not code; much of it is strings and tables.
- Plain Linux/FreeBSD binaries are bigger than the corresponding GCC ones. This is because they don't use shared libraries (which you can easily verify).
- 64-bit binaries are always bigger than their x86 equivalents. In general, RISC platforms also generate slightly bigger binaries.
Why are the binaries so big?
Answer: they are not supposed to be big.
If you perceive them as big, then
- either you have not configured FPC properly, or
- you have unrealistic expectations of how big a binary should be, or
- you are trying to do something that FPC was not designed for.
The last one is probably the least likely of the three. I will deal with these three cases briefly in the next paragraphs.
Is it bad when binaries are big?
Well, that depends on the magnitude, of course. But it is safe to say that almost nobody should be worried about binaries of a few MB, or even over 10 MB for sizable applications.
However, there are still a few categories of users who might want to keep some control over binary size:
- the embedded programming world, obviously (and by that I don't mean embedded PCs, which always have tens of MB);
- people who really distribute daily over a modem;
- contests and performance measurements.
Note that an often-cited misconception is that bigger binaries run more slowly. In general this is not true, exotic last-cycle stuff such as code cache lines aside.
While Free Pascal is reasonably usable for embedded or systems purposes, its release engineering decisions and tradeoffs are based on the requirements of building more general applications, although some of the more embedded targets (like 16-bit DOS or ARM/AVR/MIPS (= PIC32)) stretch this to the limit.
If you have such specialized needs for more regular targets, you could set up a shadow project (something like the specialized versions of certain Linux distros that are available). Pestering the already overloaded FPC team with these specific needs is not an option, especially since half of the serious embedded users roll their own anyway.
Distribution by modem
The modem case is not just "downloading over the internet" or "my shareware must be as small as possible"; for example, at my old job we did a lot of deployment to our customers and to our external sites via remote desktop over ISDN. And even with a 56k modem, you cannot send 1 MB in under 5 minutes.
Be careful not to abuse this argument to provide a misplaced rationalization for an emotional opinion about binary size. If you make this point, it is meaningless without a thorough statistical analysis of the percentage of actual modem users you have for your application (most software users don't download over the internet, but from shareware magazine CDs (translator's note: this is no longer true today)).
Another reason to keep binaries small is language comparisons (like the Language Shootout). However, this is more like solving a puzzle, and not really related to responsible software engineering.
Incorrect compiler configuration
I'm not going to explain every aspect of compiler configuration at great length, since this is a FAQ, not a manual; it is meant as an overview only. Read the manuals and buildfaq thoroughly for more background info.
Generally, there are several reasons why a binary might be bigger than expected. This FAQ covers the most common reasons, in descending order of likelihood:
- The binary still contains debug information.
- The binary was not (fully) smartlinked.
- The binary includes units containing initialization sections that execute a lot of code.
- You link in complete (external) libraries statically, rather than using shared linking.
- Optimization is not (entirely) turned on.
- The Lazarus project file (lpr) has package units in its uses section (this is done automagically by Lazarus)
In the future, shared linking to a FPC and/or Lazarus runtime library might significantly alter this picture. Of course then you will have to distribute a big DLL with lots of other stuff in it which will give you versioning issues. This is all still some time in the future, so it is hard to quantify what the impact on binary sizes would be. Especially because dynamic linking also has size overhead (on top of unused code in the shared library).
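As a quick reference for the points above, the usual size-reducing switches can be combined on the fpc command line. A minimal sketch (the program name myapp.pas is hypothetical; the switches themselves are standard FPC options):

```shell
# -Xs : strip symbols (debug info) from the executable
# -XX : smart-link the executable
# -CX : generate smartlinkable code (for full effect, every unit,
#       including the RTL, must also be built with this)
# -O2 : standard optimizations
fpc -Xs -XX -CX -O2 myapp.pas
```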
Debug information
Free Pascal uses GDB as debugger and LD as linker. These work with a system of in-binary debuginfo, in the older stabs or newer dwarf format. People often see e.g. Lazarus binaries that are 40MB. The correct size should be about 6MB, the rest is debug info (and maybe 6 MB from not smartlinking properly).
Stabs debuginfo is quite bulky, but has the advantage that it is relatively independent of the binary format. It has been replaced by DWARF except on some legacy platforms.
There is often confusion with respect to the debug info, which is caused by the internal strip in a lot of win32 versions of the binutils. Also some versions of the win32 strip binary don't fully strip the debug info generated by FPC. So people toggle a (Lazarus/IDE or FPC commandline) flag such as -Xs and assume it worked, while it didn't. FPC has been adapted to remedy this.
So, when in doubt, always try to strip manually, and, on Windows, preferably with several different STRIP binaries.
This kind of problem probably got rarer especially on Windows, since the internal linker provides a more consistent treatment of these problems. However they may apply to people using more exotic targets for quite some time to come.
You can use the whole strip system to ship the same build as the (stripped) user version while retaining the debug version (unstripped) for e.g. interpreting traceback addresses. So if you do formal releases, retain a copy of the unstripped binary that you ship, and always do a release build with debug info.
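The release workflow described above can be sketched with a few commands (myapp is a hypothetical program name; fpc and the binutils strip tool are assumed to be installed):

```shell
fpc -g myapp.pas           # release build with debug info
cp myapp myapp.unstripped  # archive this copy for decoding traceback addresses later
strip myapp                # ship this stripped copy to users
```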
The design of GDB itself lets you keep and use debug information out of the binary file (external debug information), in a separate .dbg file. The size of resulting binary is not increased due to debug information, and you can still successfully debug the binary. You don't need the .dbg file to run and use the application, it is used only by the debugger. Since all debug information has been removed from the binary file, you will not get much effect if you try to strip it.
To compile your application in this way, you should use the -Xg switch or the corresponding Lazarus GUI option: Project|Compiler Options|Linking|Debugging|Leave generating debugging info enabled, and enable use External gdb debug symbols.
A blank form application for Win32, compiled with external debug information, occupies about 1 MB, and the .dbg file about 10 MB.
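A sketch of the external-debug-info route from the command line, again with a hypothetical myapp.pas (-g and -Xg are real FPC switches):

```shell
# -Xg moves the debug info into a separate myapp.dbg file;
# the executable itself stays small and is what you ship
fpc -g -Xg myapp.pas
ls -l myapp myapp.dbg   # the .dbg file is used only by the debugger
```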
Smartlinking
(main article: File size and smartlinking)
The fundamental smartlinking principle is simple and well known: don't link in what is not used. This of course has a good effect on binary size.
However the compiler is merely a program, and doesn't have a magic crystal ball to see what is used, so the base implementation is more like this
- The compiler divides up the code into so-called "sections" which are fairly small.
- Then the linker determines what sections are used using the rule "if no label in that section is referenced, it can be removed".
There are some problems with this simplistic view:
- virtual methods may be implicitly called via their VMTs. The GNU linker can't trace call sequences through these VMTs, so they must all be linked in;
- tables for resource strings reference every string constant, and thus all string constants are linked in (one reason for sysutils being big).
- symbols that can be called from the outside of the binary (this is possible for non-library ELF binaries too) must be kept. This last limitation is necessary to avoid stripping exported functions from shared libraries.
- Another pain point is published functions and properties, which always have to be kept: references to them can be constructed on the fly using string operations, so the compiler can't trace them, and they must be linked in whenever the class is referenced anywhere. Published code might in turn call private/protected/public code, leading to a fairly large inclusion. This is one of the downsides of reflection.
Another important side effect that is logical (but often forgotten) is that this algorithm will link in everything referenced in the initialization and finalization parts of units, even if no functionality from those units are used. So be careful what you USE.
Anyway, most problems with smartlinking stem from the fact that, for the smallest result, FPC generally requires "compile with smartlinking" to be on WHEN COMPILING EACH AND EVERY UNIT, EVEN THE RTL.
The reason for this is simple: until fairly recently, LD could only "smart" link units that were the size of an entire .o file. This means that a separate .o file must be crafted for each symbol (and these tens of thousands of .o files are then archived in .a files). This is a time- (and linker-memory-) consuming task, thus it is optional, and is only turned on for release versions, not for snapshots. People having problems with smartlinking are often using a snapshot whose RTL/FCL etc. weren't compiled with smartlinking on. The only solution is to recompile the source with smartlinking (-CX) on. See buildfaq for more info.
In the future this will be improved when the compiler emits smartlinked code by default, at least for the main targets. This will be made possible by two separate developments. First, the GNU linker LD now can smartlink more finely grained (at least on Unix) using --gc-sections; secondly the arrival of the FPC internal linker (in the 2.1.1 branch) for all working Windows platforms (wince/win32/win64). The smartlinking using LD --gc-sections still has a lot of problems because the exact assembler layout and numerous details with respect to tables must be researched, we often run into the typical problem with GNU development software here, the tools are barely tested (or sometimes not even implemented, see the DWARF standard) outside what GCC uses/stresses. Moreover, versions for non *nix targets are often based on older versions (think dos, go32v2, amiga here).
The internal linker can now smartlink Lazarus (17 seconds for a full smartlink on my Athlon64 3700+ using about 250MB memory) which is quite good, but is Windows only and 2.1.1 for now. The internal linker also opens the door to more advanced smartlinking that requires Pascal specific knowledge, like leaving out unused virtual methods (20% code size on Lazarus examples, 5% on the Lazarus IDE as a rough first estimate), and being smarter about unused resource strings. This is all still in alpha, and the statistics above are probably too optimistic, since Lazarus is not working with these optimizations yet.
Initialization and finalization sections
If you include a unit in a USES section, even when USEd indirectly via a different unit, then IF the unit contains initialization or finalization sections, that code and its dependencies are always linked in.
A unit for which this is important is sysutils. As per Delphi compatibility, sysutils converts runtime errors to exceptions with a textual message. All the strings in sysutils together are a bit bulky. There is nothing that can be done about this, except removing a lot of initialisation from sysutils that would make it Delphi incompatible. So this is more something for an embedded release, if such a team would ever volunteer.
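The effect can be sketched with a minimal Pascal unit; HeavyInit and its messages are invented for illustration. Any program that lists this unit in a uses clause, even indirectly, links in its initialization code and the SysUtils machinery that code references, even if DoSomething is never called:

```pascal
unit HeavyInit;

{$mode objfpc}{$H+}

interface

procedure DoSomething;

implementation

uses
  SysUtils; // needed only by the initialization code below

procedure DoSomething;
begin
  Writeln('doing something');
end;

initialization
  // This runs at startup of EVERY program that uses this unit,
  // so it and everything it references (here: the SysUtils
  // date/time formatting code) is always linked in.
  Writeln('HeavyInit loaded at ', DateTimeToStr(Now));
end.
```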
(main article: Lazarus/FPC Libraries)
One can also make fully static binaries on any OS, incorporating all libraries into the binary. This is usually done to ease deployment, but produces huge binaries as tradeoff consequence. Since this is wizard territory I only mention this for the sake of completeness. People who do this hopefully know what they are doing.
Instead of making static binaries, many programmers do dynamic linking / shared linking. This _CAN_ generate a much, much smaller binary executable. However there are also cases where the binary gets bigger, specially on architectures like x86_64 where PIC is on by default. Dynamic linking (win32) and shared linking (*nix) are the same concept, but their internal workings differ, as can be easily seen by the fact that *nix systems need the shared libraries on the host to (cross-)link, and when linking a Windows binary you don't need the relevant .dlls on the system.
Optimization can also shave off a bit of code size; optimized code is usually tighter (but only by tenths of a percent). Make sure you use -O3. See also Whole Program Optimization for further code size reduction.
The Lazarus lpr file
In Lazarus, if you add a package to your project/form you get its registration unit added to the lpr file. The lpr file is not normally opened. If you want to edit it, first open it (via project -> view source). Then remove all the unnecessary units (Interfaces, Forms, and YOUR FORM units are the only required ones, anything else is useless there, but make sure you don't delete units that register things such as image readers (jpeg) or testcases).
You can save up to megabytes AND some linking dependencies too if you use big packages (such as glscene).
This kind of behaviour is typical for libraries that do a lot in the initialization sections of units. Note that it doesn't matter where they are used (.lpr or a normal unit). Of course smartlinking tries to minimize this effect.
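A sketch of a trimmed .lpr for a hypothetical project with one form unit (MainUnit, TMainForm and MainForm are invented names; Interfaces and Forms are the genuinely required units):

```pascal
program MyProject;

{$mode objfpc}{$H+}

uses
  Interfaces, // LCL widgetset binding; required
  Forms,      // required
  MainUnit;   // your own form unit
  // Package registration units that Lazarus adds here automatically
  // can usually be deleted, unless they register something you need
  // (image readers such as jpeg, testcases, ...).

begin
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  Application.Run;
end.
```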
2.2.0 issues
There appear to be some size problems in FPC 2.2.0 (is this still relevant for 2.6.x/2.7.x?). Note that these remarks hold for the default setup with the internal linker enabled.
- It seems that FPC 2.2.0 doesn't strip if any -g option is used to compile the main program. This is contrary to earlier versions, where -Xs had priority over -g.
- It seems that FPC 2.2.0 doesn't always smartlink when crosscompiling. This can be problematic when compiling for windows, not only because of size, but also because dependencies are created to functions that might not exist.
UPX
Note: UPX support in makefiles, and the distribution of UPX by FPC, ceased after 2.6.0. New releases of FPC no longer package UPX.
The whole strange UPX cult originates mostly from a mindless pursuit of minimal binary sizes. In reality UPX is a tool with advantages and disadvantages.
The advantages are:
- The decompression is easy for the user because it is self-contained
- Some size savings are made if (and only if) the size criterion is based on the binary size itself (as happens in demo contests). However, especially in the lowest classes it might be worthwhile to minimize the RTL manually and to code your compression yourself, because you can probably get the decompression code much tighter for binaries that don't stress all aspects of the binary format.
- For rarely used applications or applications run from removable media the disk space saving may outweigh the performance/memory penalties.
The disadvantages are:
- worse compression by archivers (like ZIP) and setup-creation tools, and the decompression engine must also be factored into _EACH_ binary
- decompression occurs on every execution, which introduces a startup delay.
- Since Windows XP, Windows features a built-in decompressor for ZIP files, so the whole point of SFX largely goes away.
- UPXed binaries are increasingly being fingered by the malware heuristics of popular antivirus and mail-filtering apps.
- An internally compressed binary can't be memorymapped by the OS, and must be loaded in its entirety. This means that the entire binary size is loaded into VM space (memory+swap), including resources.
- You introduce another component (UPX, decompression stub) that can cause incompatibilities and problems.
The memorymapping point needs some explanation: With normal binaries under Windows, all unused code remains in the .EXE, which is why Windows binaries are locked while running. Code is paged in 4k (8k on 64-bit) at a time as needed, and under low memory conditions is simply discarded (because it can be reloaded from the binary at any time). This also applies to graphic and string resources.
A compressed binary must usually be decompressed in its entirety, to avoid badly affecting the compression ratio. So Windows has to decompress the whole binary on startup, and page the unused pages to the system swap, where they rot unused, and also take up extra swap space.
Framework costs
A framework greatly decreases the amount of work to develop an application.
This comes however at a cost, because a framework is not a mere library, but more a whole subsystem that deals with interfacing to the outside world. A framework is designed for a set of applications that can access a lot of functionality (even if a single application might not).
However the more functionality a framework can access, the bigger a certain minimal subset becomes. Think of internationalization, resource support, translation environments (translation without recompilation), meaning error messages for basic exceptions etc. This is the so called framework overhead.
The size of empty applications is not caused by compiler inefficiencies, but by framework overhead. The compiler removes unused code automatically, but not all code can be removed automatically; the design of the framework determines what code the compiler will be able to remove at compile time.
Some frameworks cause very little overhead, some cause a lot of overhead. Expected binary sizes for empty applications on well known frameworks:
- No framework (RTL only): +/- 25kb
- No framework (RTL+sysutils only): +/- 100-125kb
- MSEGUI: +/- 600kb
- Lazarus LCL: +/- 1000kb
- Free Vision: +/- 100kb
- Key Objects Library: +/- 50kb
In short, choose your framework well. A powerful framework can save you lots of time, but, if space is tight, a smaller framework might be a better choice. But be sure you really need that smaller size. A lot of amateurs routinely select the smallest framework, and end up with unmaintainable applications and quit. It is also no fun having to maintain applications in multiple frameworks for a few kb.
Note that e.g. the Lazarus framework is relatively heavy due to its use of RTTI/introspection for its streaming mechanisms, not (only) due to source size. RTTI makes more code reachable, degrading smartlinking performance.
Unrealistic expectations
A lot of people simply look at the size of a binary and scream "bloat!". When you try to argue with them, they hide behind comparisons ("but TP only produces..."), and they never really say why they need the binary to be smaller at all costs. Some of them don't even realise that 32-bit code is ALWAYS bigger than 16-bit code, or that OS independence comes at a price, or..., or..., or...
As said earlier, with current HD sizes there is not that much reason to keep binaries extremely small. FPC binaries being 10, 50 or even 100% larger than those of compilers from the previous millennium shouldn't matter much. A good indicator that these views are pretty emotional and unfounded is the overuse of UPX (see above), which is a typical sign of binary-size madness, since technically it doesn't make much sense.
So where is this emotion coming from then? Is it just resisting change, or being control-freaks? I never saw much justified cause, except that sometimes some of them were pushing their own inferior libraries, and tried to gain ground against well established libs based on size arguments. But this doesn't explain all cases, so I think the binary size thing is really the last "640k should be enough for anybody" artefact. Even though not real, but just mental.
A dead giveaway for that is that the number of realistic patches in this field is near zero, if not zero. It's all maillist discussion only, and trivial RTL mods that hardly gain anything, and seriously hamper making real applications and compatibility (and I'm not a compatibility freak to begin with). Nobody sits down for a few days and makes a thorough investigation and comes up with patches. There are no cut down RTLs externally maintained, no patch sets etc, while it would be extremely easy. Somehow people are only after the last byte if it is easy to achieve, or if they have something "less bloated" to promote.
Note that the above paragraph is still true, nearly five years after writing it.
Anyway, the few embedded people I know that use FPC intensively all have their own customized cut back libraries. For one person internationalization matters even when embedded (because he talks a language with accents), and exceptions do not, for somebody else requirements are different again. Each one has its own tradeoffs and choices, and if space is 'really' tight, you don't compromise to use the general release distro.
And yes, FPC could use some improvements here and there. But those shouldn't hurt the "general programming", the multiplatform nature of FPC, the ease of use and be realistic in manpower requirements. Complex things take time. Global optimizers don't fall from the sky readily made.
Comparisons with GCC
Somewhat less unrealistic are comparisons with GCC. Even the FPC developers routinely measure themselves (and FPC) against GCC. Of course, GCC is a corporate-sponsored behemoth and the open source world's favorite. Not all comparisons are reasonable or fair; even compilers that are based on GCC don't support all of the heavily sponsored C frontend's functionality.
Nevertheless, considering the differences in project size, FPC does a surprisingly good job. Speed is OK, except maybe for some heavy scientific computing cases; binary sizes and memory use are sufficient or even better in general; and the number of platforms doesn't disappoint (though it is a pity that 'real' embedded targets are missing).
Another issue here is that FreePascal generally statically links (because it is not ABI stable and would be unlikely to be on the target system already even if it was) its own RTL. GCC dynamically links against system libraries. This makes very small (in terms of source size) programs made with fpc have significantly larger binaries than those made with GCC. It's worth mentioning here, that the binary size has nothing to do with the memory footprint of the program. FPC is usually much better in this regard than GCC.
Still, I think that considering the resources, FPC is doing extraordinarily well.
Comparisons with Delphi
In comparisons with Delphi one should keep in mind that 32-bit Delphi's design originates in a period when a lot of people DIDN'T even have Pentium-Is, and a developer with 32 MB RAM was a lucky one. Moreover, Delphi was not designed to be portable.
Considering this, Delphi scaled pretty well, though there is always room for improvement, and readjustments that correct historical problems and tradeoffs. (It is a pretty well known fact that a lot of assembler routines in newer Delphi's were slower than their Pascal equivalents, because they were never updated for newer processors. It is said that has only been corrected since Delphi 2006.)
Still, on the compiler front, FPC is slowly ceasing to be Delphi's poor cousin. The comparisons are head-on, and FPC 2.1.1 winning over Delphi is slowly becoming the rule rather than the exception.
Of course that is only the base compiler. In other fields there is still enough work to do, though the internal linker helps a lot. The debugger won't be fun though :-) There is also lots of work to do on language interoperability (C++, Obj-C, JNI) and shared libraries, even within the base system.
Comparisons with .NET/Java
Be very careful with comparisons to these JIT compiled systems: JITed programs have different benchmark characteristics, and also extrapolating results from benchmarks to full programs is different.
While a JIT can sometimes do a great job (especially in small programs that mostly consist of a single tight loop), this good result often doesn't scale. Overall, my experience is that statically compiled code is usually faster in most code that is not dominated by some highly optimizable tight loop, despite numerous claims otherwise on the net.
Note that in 2007, Java 6 suddenly caused a significant jump in the Java shootout ratings, starting to touch the bottom of the normal native compilers. This shows that one must be very careful echoing sentiments from the web (both positive and negative), and stick to one's own measurements, with the boundary conditions trimmed to the application domain you are in.
Analysis of various options
Tests on Lazarus 0.9.29 with FPC 2.4 (FPC 2.2.4 with Windows).
Optimized compiler means:
- 1. Project|Compiler Options|Code|Smart Linkable (-CX) -> Checked
- 2. Project|Compiler Options|Linking|Debugging| Uncheck all except
Strip Symbols From Executable (-Xs) -> Checked
- 3. Project|Compiler Options|Linking|Link Style|Link Smart (-XX) -> Checked
The most important item seems to be 2. For a simple application the executable size should now be 1-3 MB instead of 15-20 MB.
- 4. (Optional) Project|Compiler Options|Code|Optimizations|smaller rather than faster -> Checked (Warning: this might decrease performance)
Default Lazarus means as installed from package/setup.
LCL without debug information means after rebuilding the Lazarus IDE and LCL without debug information (-g-).
|                                      | Default Lazarus | LCL without debug information |
| Ubuntu 64-bit / Lazarus 64-bit       |                 |                               |
| Default application                  | 13.4 MB         | 7.5 MB / 8                    |
| Optimized compiler                   | 4.4 MB          | 2.70 MB (0.29svn FPC 2.4: 2.5)|
| Ubuntu 32-bit / Lazarus 32-bit       |                 |                               |
| Default application                  | 19.6 MB         | 5.7 MB                        |
| Optimized compiler                   | 2.9 MB          | 1.6 MB                        |
| Windows XP 32-bit / Lazarus 32-bit   |                 |                               |
| Default application                  | 11.8 MB         | 2.14 MB                       |
| Optimized compiler                   | 1.62 MB         | 1.50 MB                       |
| Windows 7 64-bit / Lazarus 64-bit    |                 |                               |
| Default application                  | 12.3 MB         | 3.20 MB                       |
| Optimized compiler                   | 2.14 MB         | 2.16 MB                       |