I would have expected their approach to be even faster relative to the baseline than what they measured, but it could just be that they didn't do serious performance analysis and tuning.

Think of const-only or functional programming, without any assignment statements.
The two major downsides would be extra complexity in the frontend and the impossibility of using the algorithm for SSA repair after IR modifications.
The extra complexity in the frontend can hopefully be hidden behind an abstraction, and a lot of serious compilers end up using a separate implementation of SSA construction for repair anyways, so I don't know how big of a deal these drawbacks really are.
You can write compilers without it, but for most optimizations you'll need to convert to SSA. As for what makes SSA so special for compiler construction: it makes reasoning about immutability and loop invariants a ton easier, along with constant propagation (https://en.wikipedia.org/wiki/Static_single_assignment_form). More generally, SSA means implicit use/def information.
Use/def information and traversal are critical components of many static analyses and transformations: having the IR in SSA form greatly simplifies reasoning about and implementing these analyses and transformations.
I am one of the authors of the "highly-optimized implementation of Cytron et al.'s algorithm" that is used in LLVM.
That is a bit of a misattribution; I changed LLVM to use the DJ graph algorithm by Sreedhar and Gao before they took their measurements for the paper.

This is a great question, and it's one that isn't really made clear anywhere in compiler books that teach it. The tl;dr is that you get lots of nice properties for free, and that it makes writing analysis algorithms much simpler, with fewer edge cases.

What: The IR (internal representation) is in SSA form if every variable is defined once, and every use of a variable refers to exactly one definition. Without SSA, you might have multiple definitions of that variable, and so not know its exact value, type, etc., when you go to use it. Without SSA form, to get that data you'd have to analyze the flow of the program (look at loops and if statements, etc.). With SSA form, you can ignore those and get a similar level of precision.

Memory usage: Without SSA form, an analysis must store information for every variable at every program point. In English: when storing the results of your analysis, you can save them much more cheaply in SSA form, from a CPU and memory perspective. Without SSA, you need a hashtable of variables, with a list of values for each.

Flow-sensitivity: A flow-insensitive algorithm performed on an SSA form is much more precise than if SSA form were not used. The flow-insensitive problem of multiple definitions of the same variable is solved by the single-assignment property.
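As a sketch of that last point (hypothetical Python, with assumed instruction tuples, not code from the thread): a flow-insensitive constant propagation pass over SSA needs only one entry per name, not a table per program point, because each name has exactly one definition.

```python
# Hypothetical SSA program: (dest, op, operands). In SSA a definition
# dominates its uses, so processing definitions in order is enough.
instructions = [
    ("a1", "const", (3,)),
    ("b1", "const", (4,)),
    ("c1", "add", ("a1", "b1")),
    ("d1", "mul", ("c1", "c1")),
]

# One constant (or absence) per SSA name -- no per-program-point maps.
consts = {}
for dest, op, args in instructions:
    if op == "const":
        consts[dest] = args[0]
    elif all(a in consts for a in args):
        if op == "add":
            consts[dest] = consts[args[0]] + consts[args[1]]
        elif op == "mul":
            consts[dest] = consts[args[0]] * consts[args[1]]

print(consts)  # {'a1': 3, 'b1': 4, 'c1': 7, 'd1': 49}
```

On a non-SSA IR, a second assignment like `c = ...` elsewhere would force the analysis to either give up on `c` or track it per program point; the single-assignment property makes the flow-insensitive pass precise here.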