The C language is defined to be portable, but it is not portable enough to allow code to be compiled and executed on a variety of platforms without modifications.
There is actually no common solution to this problem. Almost always the preprocessor is used to tailor the code for compilation on a particular platform, and several mechanisms exist to drive the preprocessor through the preprocessor kludges placed into the source code.
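To illustrate what such a kludge looks like, here is a minimal sketch in C that selects a sleep routine using the predefined compiler macro _WIN32; the wrapper name portable_sleep is invented for this example.

    /* Select a platform specific sleep routine at preprocessing time.
       The wrapper name portable_sleep is made up for this sketch. */
    #ifdef _WIN32
    #include <windows.h>
    static void portable_sleep(unsigned int seconds)
    {
        Sleep(seconds * 1000);   /* Win32 Sleep() counts milliseconds */
    }
    #else
    #include <unistd.h>
    static void portable_sleep(unsigned int seconds)
    {
        sleep(seconds);          /* POSIX sleep() counts seconds */
    }
    #endif

Multiply this pattern by every platform difference a program touches and the motivation for the mechanisms described below becomes obvious.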
The makefile sections solution

This is one of the first portability solutions. The person who was compiling a package had to select a section corresponding to the platform he was using by editing the makefile. After doing so he could issue a simple make command to compile the code and, if he was lucky, the program compiled and ran without problems. The amount of luck required increased rapidly when the GNU/Linux operating system emerged, as it is so highly configurable that it is no longer possible to write a makefile section for each possible configuration.
This solution is unusable for large packages spanning multiple directories, because the resulting makefile would be so huge that it would become unmaintainable. Another drawback of this solution is its inability to support cross-platform builds.
The autoconf solution

The autoconf solution is a more advanced version of the makefile sections solution. The idea behind autoconf is that the developer describes the requirements of the program rather than a list of configurations for different platforms. A script is used to check the program requirements, determine how to fulfill them and construct the makefile section according to its findings. This frees the programmer from creating large and error-prone sets of configuration items and decreases the "amount of luck" required to compile a package, as a large number of small tests tends to adapt better than a small number of monolithic configuration blocks.
autoconf is a suite that simplifies the description of the program requirements and automates the generation of the checking script. The script is called configure. It is a usually huge shell script that is generated automatically from a (much smaller) description of the program requirements written by the developer of the package.
Before the package can be built, one must run the configure script. The script scans the platform it is running on for its properties and adjusts the compilation process accordingly by substituting platform specific values into the templates of the various files that need to be tailored to that specific platform. This process usually generates a set of makefiles that are used by the make utility to build the software. Sometimes (especially for packages with a long list of requirements) a config.h file is also created. This file controls the preprocessor kludges by defining or undefining various preprocessor macros. The makefiles can also define or undefine macros by passing appropriate command-line options to the compiler.
The actual source code valid for the platform is produced by the preprocessor, which applies the preprocessor macro kludges to the rest of the source code and passes the result to the compiler. Especially in older software these preprocessor macros are used so heavily that the source code of the affected software becomes pretty unreadable.
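As a sketch of how config.h drives such kludges, assume the package's requirement description asked autoconf to check for the unistd.h header and the strdup function; autoconf's AC_CHECK_HEADERS and AC_CHECK_FUNCS tests then define HAVE_UNISTD_H and HAVE_STRDUP in config.h, and the source can react to them like this (the fallback body is my own illustration):

    /* config.h is generated by configure and defines HAVE_* macros
       according to what was found on the build platform. */
    #include "config.h"

    #include <stdlib.h>
    #include <string.h>

    #ifdef HAVE_UNISTD_H
    #include <unistd.h>      /* present on POSIX-like platforms */
    #endif

    #ifndef HAVE_STRDUP
    /* The C library lacks strdup(); supply a replacement. */
    static char *strdup(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);
        if (copy != NULL)
            strcpy(copy, s);
        return copy;
    }
    #endif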
What is wrong with autoconf?

In fact there is nothing wrong with the autoconf suite itself. It is a great solution for various portability problems and I have personally compiled a lot of software on my box without difficulties. I have actually found sources that use autoconf to be so reliable that I prefer compiling from source to installing binaries (I occasionally run into problems when trying binary distributions of some software due to library compatibility problems or similar issues, but when the sources of those packages use autoconf, they are able to adapt to my somewhat nonstandard GNU/Linux box).
The problem is the placement of the autoconf code. The platform portability tests (namely the configure program) and the feature usage library for the features which vary from platform to platform are placed into every package that needs them. This introduces problems like keeping the versions of the platform specific stuff in sync, which packages like autoconf solve only partially: the actual platform dependent stuff, which is supplied by the programmer himself, fully suffers from all these issues.
Another big problem with this placement is that it does not support cross-platform builds very well. In such a build configure cannot "touch" the destination platform to determine the exact values of the system dependent variables, so it must guess them, and the guess may be wrong. And even when the guess is right, the build system on the build platform may be unable to cope with it. The probability of configure guessing wrong increases greatly when cross-compilation between different processors (e.g. building a MacOS binary on x86) or even platforms (building a Windows binary on Linux) is attempted. This probability is so high that trying to cross-compile a medium sized software package almost always introduces subtle bugs that cannot be caught by the tests accompanying the package.
The platform dependency code is also error prone, because there is no way to automatically check whether a particular requirement description is valid and makes sense. There is limited checking support in autoconf that can detect, for example, overquoted macro declarations, but the language used (the M4 macro processor language) is not declarative and thus does not offer the robustness known from declarative strongly-typed programming languages such as the Borland Pascal dialect of Pascal.
All this adds a burden to the developer which I consider unnecessary. Most package developers are not educated enough in the field of platform differences to be able to properly maintain the library of platform specific solutions required by their packages.
To get a better picture of the impact of all the duplication problems mentioned above, imagine two game packages, for example "Rocks And Diamonds" by Holger Schemel and "PowerManga" by TLK Games.
Both games need to update the screen in real time, both need some kind of joystick input (either a "true" joystick or a keyboard emulation) and some kind of time synchronisation so that the game does not run awfully fast on modern computers with their fast (micro)processors.
However, all these actions are carried out differently on GNU/Linux, on Windows and on MacOS. So before we can update the screen, grab the input or synchronize the timing, we first need to know which platform we are on and then choose the appropriate methods for these actions. And this is the job for configure.
Currently both games carry their own configure scripts which check the platform and decide how to update the screen and so on. When a user wants to compile and install both games on his machine, the checks for the screen update method and the rest are done twice, because there are two configure scripts which do not communicate with each other. This is a waste of computer power and time (an average configure script needs a minute to determine everything that is required for the compilation and prepare the results for use). After both games are installed, the code to do screen updates is located twice on the hard drive, one copy in "PowerManga" and another copy in "Rocks And Diamonds".
If Holger Schemel discovers a bug in the screen updating code for Windows and updates "Rocks And Diamonds" to work around it, "PowerManga" may still suffer from that bug until TLK Games learn about it and fix "PowerManga".
Imagine now that the current versions of these games are unacceptably slow, so we need a 2GHz+ machine to play them. If Holger Schemel discovers how to speed up the screen updates on Windows by using some weird tricks (and creates a new version of "Rocks And Diamonds" which under Windows needs only a 200MHz+ CPU) and TLK Games at the same time discover similar tricks for GNU/Linux (so the new PowerManga version requires only a 300MHz+ CPU on GNU/Linux), we end up with two games that both contain some platform specific improvements, but neither can benefit from the knowledge of the other. And when a third person knows how to speed things up under MacOS and uses this knowledge in his software, neither of our two games benefits from his knowledge at all.
Since Holger Schemel is a different person from TLK Games, he uses different development methods and thinks about software differently. So if both Holger Schemel and TLK Games finally discover how to speed up the screen under Windows, it is highly probable that their implementations of these improvements in their games will differ. The result is two software packages with two different solutions to the same problem. With only two packages this is not such a big problem. But currently there are tens of action games out there, and (since all action games need a fast way to update the screen) there are tens of solutions to the same problem (the fast screen update) in them.
Both Holger Schemel and TLK Games need to investigate new platforms and implement ports of their games for them if they want their games to run there. This takes development power away from them, so they develop the games more slowly than they could. The same holds for the developers of all the other action games in the world.
If someone gets an idea for a new game, he must learn at least one platform deeply before he can start writing it. After he has finished he must still keep an eye on the platforms and their ports to be able to support the popular platforms that users demand.
The net result of all this is scattered developer power and lower cooperation.
OSHS uses (or plans to use) a modified version of the autoconf suite. But it does not place the platform specific stuff into each OSHS package. Instead, all platform specific stuff is gathered into one package which provides the kernel access and emulation library (currently named SYSLIB, though I am considering renaming it to KERNEL). From the software's point of view only a consistent OSHS interface is visible, even if the actual operating system on the target machine is GNU/Linux or Windows (there are means to determine the actual underlying platform, but applications are not entitled to use them; they are reserved for system tools that need this knowledge, such as compilers). The interface of the kernel may not be changed without the acceptance of the OSHS kernel development community. An implementation of the interface, however, needs no specific approval: strictly speaking, if an implementation implements the OSHS interface precisely, it is automatically approved and OSHS software can use it just like any other implementation of the OSHS kernel.
This solution centralizes the platform differences into one point. It allows the experts in portability solutions to work together and communicate with each other about portability, without worrying about other developers who are not aware enough of portability issues to keep things going under multiple platforms. The experts can also concentrate solely on bringing the OSHS interface to various platforms; their power is not drained by unrelated software that needs to cooperate with different platforms.
This also allows other developers to develop OSHS software without first gaining experience with portability and its related problems. It brings the art of OSHS programming to more hands, and it may result in more OSHS software than would be possible using the traditional approach of placing platform ports into each package.
And last but not least, the ability to do cross-platform builds is greatly improved. The build system does not need to know what is available on the target platform, nor does it need to make guesses about platform specific variables. All that the build system needs to know is how to generate code for the target platform and how to link to the OSHS kernel library under that platform. Period. This knowledge can be embedded into the build system much more easily and will work without errors, because the problem of inexact or incorrect guesses has gone away. The price for this is that the kernel emulation library itself cannot be cross-built, but this is a much smaller price than having every package cope with platform differences.
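To make the idea concrete, a game's main loop under this scheme might look like the sketch below. The header name oshs/kernel.h and all the functions and types used here are hypothetical inventions of mine, not the real interface; the point is only that the same source compiles unchanged on every platform that implements the OSHS interface.

    /* Hypothetical sketch of a game built against the OSHS kernel
       library; every name prefixed with oshs_ is invented for this
       example. */
    #include <oshs/kernel.h>

    int main(void)
    {
        oshs_joystick_state js;

        for (;;) {
            oshs_joystick_read(&js);   /* input, however this platform provides it */
            if (js.quit_pressed)
                break;
            oshs_screen_update();      /* the fast path lives in the kernel library */
            oshs_wait_frame(50);       /* timing: hold the game to 50 frames/s */
        }
        return 0;
    }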
To see the advantages of this centralisation, imagine that both Holger Schemel and TLK Games had ported their games to OSHS, and that they ported them before they discovered the ways to speed up the screen update code, so the first OSHS versions of the games need a 2GHz+ CPU to run.
First of all, since OSHS software can run on each platform where its kernel interface is available, they can remove the platform dependent stuff from their games and merge it into the OSHS kernel library. Once this work is done they can forget about any platform differences and concentrate on the games themselves. Another advantage is that the source code of both games becomes much more readable, because all the platform dependency kludges have disappeared from the games (they moved into the OSHS kernel library).
When Holger Schemel discovers the weird tricks that speed up the screen update procedure on Windows, he will not place this improvement directly into "Rocks And Diamonds". He will place it into the OSHS kernel library, because that is the place where such improvements belong. His game will automatically benefit from it, because it already uses the OSHS kernel library to do the screen updates. He does not need to change even a comma in the source code of his game, so the source code remains just as readable as before.
But the benefits do not end here. Imagine that Holger Schemel does not keep his screen update improvements to himself but, as a good hacker, sends them to the OSHS kernel library team so they are included in the next release. Now when TLK Games upgrade their copy of the OSHS kernel library, their game also gets a performance boost on Windows. They do not need to invest or invent anything to get it.
When TLK Games later discover how to speed up screen updates under GNU/Linux, they likewise update the OSHS kernel library instead of the game itself. They too are good hackers, so they send their improvements to the OSHS kernel library team and the improvements appear in the next release. Now when Holger Schemel updates his copy of the OSHS kernel library, he gets the GNU/Linux speed improvements made by TLK Games as well. The result is that "Rocks And Diamonds" suddenly performs well under GNU/Linux without any specific changes made by Holger Schemel.
When someone knows how to do fast screen updates, joystick input and timing synchronisation on BeOS, he can put his knowledge into the OSHS kernel library (or ask the OSHS kernel library team to do so). Once this BeOS knowledge is in place, both "Rocks And Diamonds" and "PowerManga" are able to function under BeOS, even if Holger Schemel and TLK Games have never heard of BeOS. They do not need to update their games, because the games use the OSHS kernel library interface, which is the same on GNU/Linux, on Windows and on BeOS.
And finally, Holger Schemel and TLK Games are no longer required to investigate new platforms and implement ports of their games for them, so all their development power goes into their games. They can leave the work on the platforms to the OSHS kernel development team and to the people who are much more interested in platforms.