The idea of a LanguageComparisonFramework seemed obvious. Many pages debate language issues seemingly without any consistency or systematization. Language qualities are therefore in the eye of the beholder: for the Python aficionado, EconomyOfExecution is obviously unimportant; for the SmugLispWeenies, EconomyOfExpression cannot trump the regularity of structuring everything around EssExpressions.
So let's enumerate them, and maybe dedicate a page to each. I start with Cardelli's framework from his BadEngineeringPropertiesOfOoLanguages, taking the liberty of refining some criteria and adding others.
Some of them need further refinement. EconomyOfSmallScaleDevelopment and EconomyOfLargeScaleDevelopment are somewhat fuzzy. On the other hand, is a trait like EconomyOfCompilation still relevant in the days of GHz, GB development workstations? How about EconomyOfExecution?
Since, in my humble opinion, EconomyOfLargeScaleDevelopment and EconomyOfSmallScaleDevelopment are somewhat fuzzy and subjective, we should look for more modular units that drive the economies at both the large and the small scale.
Perhaps we might also add...
With regard to the above, maybe we can gather some realistic LanguageTestCases. Some tentative test cases have already been provided here on the wiki. One is the OddWordProblem, together with its OddWordProblemSolutions. However, that problem is somewhat trivial.
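To make the kind of test case concrete, here is a minimal Python sketch of one common reading of the OddWordProblem (reverse the letters of every second word in a period-terminated sentence). This is only an illustration under that assumed reading, not one of the canonical OddWordProblemSolutions; the classic statement adds constraints on memory and I/O that this sketch ignores.

```python
def odd_words(text):
    """One reading of the OddWordProblem: given words separated by
    blanks and terminated by a period, reverse every second word."""
    body = text.rstrip(".")
    words = body.split()
    # Even-indexed words pass through; odd-indexed words are reversed.
    out = [w if i % 2 == 0 else w[::-1] for i, w in enumerate(words)]
    return " ".join(out) + "."

print(odd_words("what time is it now."))  # → what emit is ti now.
```

The interesting part of the original puzzle is not this transformation itself but doing it under tight constraints, which is exactly why it works as a LanguageTestCase.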
An ongoing, year-after-year LanguageTestCase is provided by the IcfpProgrammingContest.
Another very interesting example I came across recently is EnumeratingRegularLanguages. It is very interesting because it shattered some of the prejudices I had against HaskellLanguage, and my over-confidence that for any such task SchemeLanguage was the way to go for me (Haskell is vastly more complicated, with a steep learning curve, and subject to memory loss, whereas Scheme seems to be like learning to swim or to ride a bicycle: you never forget it). -- CostinCozianu
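For readers who have not seen that page: the task is to list, in order, the strings of a regular language. Here is a minimal Python sketch under the assumption that the language is given as a DFA (the wiki page itself may frame the task differently, e.g. starting from a regular expression); it enumerates accepted strings in shortlex order by breadth-first search over the DFA.

```python
from itertools import islice

def enumerate_dfa(start, accept, delta, alphabet):
    """Yield the strings accepted by a DFA in shortlex order.
    delta: dict mapping (state, symbol) -> state; missing keys mean
    there is no transition. Determinism guarantees no duplicates."""
    frontier = [("", start)]            # all reachable strings of the current length
    while frontier:
        nxt = []
        for s, q in frontier:
            if q in accept:
                yield s
            for a in sorted(alphabet):  # sorted extension keeps lex order per length
                if (q, a) in delta:
                    nxt.append((s + a, delta[(q, a)]))
        frontier = nxt

# Example: the language (ab)* over the alphabet {a, b}
delta = {(0, "a"): 1, (1, "b"): 0}
print(list(islice(enumerate_dfa(0, {0}, delta, "ab"), 4)))
# → ['', 'ab', 'abab', 'ababab']
```

The infinite generator plus `islice` is a rough imperative analogue of the lazy-list style that makes the Haskell solutions on that page so compact.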
I had run into a similar problem, noticing how most features of a language were aimed at common goals. In evaluating a language, remember that one shouldn't evaluate just the strict textual representation of the language, but the actual tools a developer uses. If a language is primarily handled through code generators, it is those code generators that should be assessed.
Very often, when faced with a problem in any given language, you'll end up thinking "I can do that, but it will break OnceAndOnlyOnce", "I can do that, but it's hideous", "I can do that, but it breaks type safety", or, even worse, YouCantGetThereFromHere. As such, I thought I'd try to describe the yardsticks by which to measure a language, to see how languages compare.
The core goals of language/development-platform design should be to maximize the following attributes:
Related issues:
I've tried to make this list as direct as possible. The exercise is to think of a language, think of its flaws (why you don't use it), and consider which of these attributes it fails to properly satisfy. For example, most AlgolFamily users would contend that LispLanguage fails abysmally on (2) and (5), due to the bizarre (oversimplified?) syntax and standard libraries that simply can't compare to the modern behemoths of DotNet and the like. On the other hand, the whole Lisp family whups all its opponents at OnceAndOnlyOnce and some forms of Evaluatability. The perfect language would be one that excels at all 8 attributes for any given task.
But not necessarily: that is not the intent of the page. Having a coherent and relevant LanguageComparisonFramework, together with some objective analyses of various languages in the spirit of Cardelli's famous BadEngineeringPropertiesOfOoLanguages, can be a substantial help to software engineers, both experienced and novice. The very idea is to avoid a LanguagePissingMatch.