Compile-file twice on same file gets 300x speedup

Bug #2073544 reported by Douglas Katzman
Affects: SBCL
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

The attached CASE expression takes 60 seconds to compile on my macbook, but a second COMPILE-FILE on the same file takes 0.2 seconds. (hooray for hash-caches)

The compiler is computing pairwise type-unions of all the intervals between the keys of the CASE.
Ultimately this CASE gets converted to a perfect hash to make it dense. Did it really have to do all that type algebra to know that changing the form would be a good idea? Maybe we can compute a measure of sparseness up front, and not make the compiler process all the discrete ranges comprising the negation of the key list.
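One way to read the suggestion: decide whether the CASE is worth converting using only the keys themselves, before any type algebra runs. A sketch of such a density check (in Python rather than Lisp for brevity; the function name and the 1/4 threshold are mine, not SBCL's):

```python
def dense_enough(keys, threshold=0.25):
    """Decide up front whether an integer CASE is dense enough to be
    worth turning into a jump table / perfect hash, from the keys alone."""
    lo, hi = min(keys), max(keys)
    span = hi - lo + 1                 # size of the covering interval
    return len(keys) / span >= threshold

dense_enough([0, 1, 2, 3, 4, 5, 6, 7, 9])   # → True (9 keys over a span of 10)
dense_enough([0, 10**6, 10**9])             # → False (3 keys over a huge span)
```

This costs O(n) over the key list and involves no interval arithmetic at all, so the expensive negation-of-key-list processing could be skipped outright for sparse cases.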

* (time (compile-file "slothy"))
First time:
 61.674 seconds of real time
 61.655733 seconds of total run time (61.485467 user, 0.170266 system)
 [ Real times consist of 0.246 seconds GC time, and 61.428 seconds non-GC time. ]
 [ Run times consist of 0.246 seconds GC time, and 61.410 seconds non-GC time. ]
 99.97% CPU
 2 forms interpreted
 1,807 lambdas converted
 148,017,375,478 processor cycles
 10,977,194,800 bytes consed

Second time:
 0.212 seconds of real time
 0.088560 seconds of total run time (0.087722 user, 0.000838 system)
 41.98% CPU
 2 forms interpreted
 1,806 lambdas converted
 510,589,160 processor cycles
 39,369,536 bytes consed

Stas Boukarev (stassats) wrote:

Intersection itself needs to be faster.

(defun slow (x)
  (typep x '(member . #.*keylist*)))

has the same problem.

Stas Boukarev (stassats) wrote:

Ultimately, unions of numeric types have to be represented by a separate numeric-type-union structure with a vector of sorted ranges.
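With both operands kept as sorted vectors of disjoint ranges, union becomes the classic merge of sorted interval lists rather than pairwise type algebra. A sketch over closed integer ranges (Python illustration; names are mine, not SBCL's):

```python
def range_union(a, b):
    """Union of two sorted lists of disjoint closed ranges (lo, hi),
    merging overlapping or adjacent ranges."""
    merged = sorted(a + b)  # sort by lo; a true two-pointer merge is O(n + m)
    out = []
    for lo, hi in merged:
        if out and lo <= out[-1][1] + 1:   # overlaps or touches the previous range
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

range_union([(0, 3), (10, 12)], [(2, 5), (13, 20)])
# → [(0, 5), (10, 20)]
```

The output is again a sorted vector of disjoint ranges, so the representation is closed under union and the cost stays linear in the number of intervals instead of quadratic.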

Paul F. Dietz (paul-f-dietz) wrote:

Would it make sense to limit the complexity of numeric types by widening them if they accumulate too many intervals? I doubt representing them completely accurately would buy much optimization.
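The widening idea can be phrased as: once a union exceeds some interval budget, collapse it to a single covering range, trading precision for a bounded representation. A minimal sketch (Python illustration; the budget of 16 is an arbitrary choice, not anything SBCL uses):

```python
def widen(ranges, budget=16):
    """If a sorted list of disjoint (lo, hi) ranges has more than `budget`
    intervals, widen it to the single interval covering all of them."""
    if len(ranges) <= budget:
        return ranges
    return [(ranges[0][0], ranges[-1][1])]

widen([(i * 10, i * 10 + 1) for i in range(100)], budget=16)
# → [(0, 991)]
```

Widening is a standard abstract-interpretation trick: the result is a sound over-approximation (a supertype), so type checks remain correct while the representation stays small.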
