I am seeing a situation where a program compiles very slowly. The program has some generic functions with a large number of methods across many classes. When I profile the compile, 95% of the time is spent below these functions (each accounting for 95+% of the time in the profile graph):
SB-PCL::REAL-ENSURE-GF-USING-CLASS--GENERIC-FUNCTION
SB-PCL::NOTE-GF-SIGNATURE
SB-PCL::FTYPE-DECLARATION-FROM-LAMBDA-LIST
SB-INT:GLOBAL-FTYPE
SB-PCL::COMPUTE-GF-FTYPE
(SB-PCL::FAST-METHOD SB-PCL::GENERIC-FUNCTION-PRETTY-ARGLIST (STANDARD-GENERIC-FUNCTION))
(each of those calls the next, with about the same number of calls for each), followed by
SB-PCL::UNPARSE-SPECIALIZERS (73%)
SB-PCL::REAL-UNPARSE-SPECIALIZER-USING-CLASS (45%)
None of this happens when the fasl file is loaded. A workaround is to compile these files separately and then quickload the fasl files, but that's a stopgap measure, not a solution.
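The workaround amounts to paying the compilation cost once out-of-band. A minimal sketch (the file name is illustrative):

```lisp
;; Compile once in a separate step; the slow path above only runs
;; during COMPILE-FILE, not when the resulting fasl is loaded.
(compile-file "slow-methods.lisp")   ; slow, done once
(load "slow-methods.fasl")           ; fast in subsequent sessions
```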
It seems likely to me that a poorly scaling algorithm is being used to compute something that isn't actually needed at that point. Could this be sped up by computing and caching the gf signature lazily, only when it is needed?
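To illustrate the lazy-caching idea (this is only a sketch of the pattern, not SBCL's actual code; EXPENSIVE-GF-SIGNATURE is a hypothetical stand-in for whatever SB-PCL::COMPUTE-GF-FTYPE does today):

```lisp
;; Instead of recomputing the gf signature eagerly on every
;; DEFMETHOD, compute it on first demand and invalidate the cache
;; when a method is added or removed.
(defvar *gf-signature-cache* (make-hash-table :test #'eq))

(defun lazy-gf-signature (gf)
  (or (gethash gf *gf-signature-cache*)
      (setf (gethash gf *gf-signature-cache*)
            (expensive-gf-signature gf))))  ; hypothetical expensive computation

(defun note-method-change (gf)
  ;; Adding a method just invalidates; no recomputation happens
  ;; until someone asks for the signature again.
  (remhash gf *gf-signature-cache*))
```

That would turn O(methods) recomputations per DEFMETHOD into at most one computation per actual use of the signature.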
The program where this is coming up is a real program, not an artificial test case.
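That said, the shape of code that triggers it can be sketched artificially: one generic function with one method per class, scaled up. All names here are illustrative; timing COMPILE-FILE on a file containing this as N grows shows the non-linear behavior.

```lisp
;; Generate N classes, each with a method on the same gf, so that
;; each DEFMETHOD is processed while the gf already has many methods.
(defgeneric frob (x))

(defmacro define-many-methods (n)
  `(progn
     ,@(loop for i below n
             for class = (intern (format nil "C~D" i))
             append `((defclass ,class () ())
                      (defmethod frob ((x ,class)) ,i)))))

(define-many-methods 500)
```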