What is the known performance limit of compilers on current hardware?

Is there a real reason to use dynamic linking and binary distributions these days?

An alternative to binary distributions is to distribute everything as source and let the target platform decide whether or not to compile it to binaries. But whether that is practical depends on how well computers today can compile from source.

Dynamic linking is relevant here because it is what allows libraries to be distributed as binaries as well.

So how good is compiler performance today, with and without optimization? And what can be done to get the best performance out of the compiler?

-2




7 replies


The Gentoo Linux distribution does just that. We are close to the point where it becomes cheaper to distribute source instead of binaries. Currently, there are several problems that need to be addressed:

  • You need a compiler first. The compiler itself always has to be shipped as a binary, so a 100% from-source system will never work; but this is hardly a real problem.
  • Compilation is slow, even today. Compilers are getting better, processors are getting faster, and tools such as make allow you to compile code in parallel. But it is still slower than copying the files from the installation media to disk. Much slower.
  • Modern languages (a.k.a. scripting languages) are usually compiled on the fly. This solves the problem at the expense of execution speed, but they are getting better; in a couple of years they may catch up. In the end it is only CPU time that limits how much optimization you can do in a scripting language.
  • Companies don't want to give away source code.
  • To compile something from source, you need everything it depends on, so you need to compile them too, even if you don't really need them. Imagine an image processing program. It can read many different file formats. Do you really want to compile all the libraries for every exotic image format out there before you can start installing the program itself?


Open source solves the "companies don't want to give away source code" problem, and Gentoo solves the dependency problem. Right now we're simply stuck on compilation speed: today's processors are too slow to run something like games or MS Office from source.

+2




Your question is a little unclear, but it seems to be about two unrelated subjects. Splitting components into dynamically and statically linked libraries does make builds faster in many cases. However, dynamic and static libraries were not invented for that purpose: they were invented to provide reusable components shared between modules and programs, not to make compilation faster.
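
For illustration, here is a minimal C sketch of such a reusable component; the file name, function and build commands in the comment are hypothetical, and only show how one translation unit can be packaged as either a static or a shared library instead of being recompiled inside every program that uses it.

    /* mathutil.c -- a hypothetical reusable component.
     *
     * The same code can be packaged either way, roughly:
     *   static:  cc -c mathutil.c && ar rcs libmathutil.a mathutil.o
     *   shared:  cc -shared -fPIC -o libmathutil.so mathutil.c
     *
     * Programs that use it link against the library; they do not
     * recompile this code themselves.
     */

    /* Clamp a value into the inclusive range [lo, hi]. */
    int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }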



+4




There are real reasons to provide binary distributions of software. Setting aside the business benefit of obscuring your software by compiling it, so that the proprietary logic is not exposed in the clear, it simply makes things easier for the user the program is intended for.

I would hate to receive big software packages like GCC, GNOME, PHP and a million others in source form if I weren't developing that software. Even on my quad-core machine, compilation takes time. I'd rather just move some binary blobs around.

Also remember that (at least for Linux systems) building binary distributions allows for consistent, stable systems that have been tested as a whole. A binary distribution is the most direct way to hand a verified software configuration to the user.

Given that many JIT / interpreted languages run at about 1/2 the speed of C (roughly speaking; I'm sure some do better), I'd rather have natively compiled software than see everything written in Java / C#, especially when I don't need to see the code. Forget source distributions and compilation on demand: as a user (and developer), RPMs / .debs are much easier.

So, to answer "Is there a real reason to use dynamic linking and binary distributions these days?": the dynamic-linking part isn't really a problem. Runtime symbol resolution does not meaningfully degrade performance; how else would projects like Apache and many others support their module architectures? (Hey, they could always ship a built-in compiler / interpreter, linker and loader and do it all by hand! Shudder.)

Software is compiled once and used a heck of a lot.

As for speeding up compilers, it depends on the semantics of the language being compiled and on how much analysis work is done. You can write a very fast C compiler, but the generated code may not be optimal and will therefore run more slowly and use more memory. Given that software is compiled once and run a great many times, I'd rather the software took an hour longer to compile and won that time back at runtime. But it doesn't really matter, because we have binary distributions.

+2




Three things immediately come to mind when reading your question:

  • Dynamic linking is not related to binary distribution.

  • You just can't apply good compile-time optimizations if you want to compile as quickly as possible. (That is, to make a compiler fast, remove every optimization; see the sketch after this list.)

  • JIT compilers seem to strike a good trade-off between execution speed and compilation speed, but the code they run is still deployed and distributed in binary form, because some optimizations (the most expensive ones) are still best done in an ahead-of-time compilation step, and because you really don't want to need a complete toolchain on every computer just to be able to distribute source code.
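
As a rough illustration of the compile-time / run-time trade-off mentioned above, here is a tiny C sketch; the file name and the compiler invocations in the comment are only indicative, but a loop like this is exactly where an optimizing build spends its extra compile time.

    /* sum_squares.c -- illustrative only.
     *
     * Hypothetical invocations of a typical C compiler:
     *   cc -O0 -c sum_squares.c   (compiles quickly, generates naive code)
     *   cc -O2 -c sum_squares.c   (compiles more slowly, generates faster code)
     */
    long sum_squares(const long *v, long n)
    {
        long total = 0;
        for (long i = 0; i < n; i++)
            total += v[i] * v[i];   /* at -O2 this loop may be unrolled or vectorized */
        return total;
    }

The source is the same either way; the only difference is how much time the compiler is allowed to spend making the generated code fast.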

+1




Dynamic linking also allows you to detect installed components at runtime, which is useful in at least two situations:

  • The application supports licensed features that may or may not be present with a given product installation.
  • The application supports a plugin architecture where third parties can create components for it (a minimal sketch follows this list).
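
Here is a minimal sketch of that kind of runtime detection on a POSIX system using dlopen / dlsym; the library file name and the init_plugin entry point are hypothetical stand-ins for whatever convention an application defines.

    /* plugin_host.c -- minimal plugin-loading sketch (POSIX; link with -ldl). */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Try to load an optional component that may or may not be installed. */
        void *handle = dlopen("./libexotic_format.so", RTLD_NOW);
        if (!handle) {
            printf("optional component not installed: %s\n", dlerror());
            return 0;   /* the application keeps working without it */
        }

        /* Look up the plugin's (hypothetical) entry point by name. */
        int (*init_plugin)(void) = (int (*)(void))dlsym(handle, "init_plugin");
        if (init_plugin)
            init_plugin();

        dlclose(handle);
        return 0;
    }
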
+1




When recompiling a project that produces a single binary target, typically only the files that have changed since the last compilation are rebuilt before linking, so splitting a project into separate binary targets should improve overall compile time, but only marginally.

0




It depends on the type of project you are building.

A word processor that spends a hundred times longer waiting for input than computing doesn't need to be a binary, but a real-time game certainly needs every last bit of speed, especially one with AI players.

And there are a few more cases: operating systems, graphics editors (you don't want to wait five minutes for an effect to be applied), simulators, research software. So most of the interesting things you can do with a computer require speed at runtime rather than speed at compile time.

0

