New approach to big sampledata
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| KARL3 | Fix Released | Low | Shane Hathaway | |
Bug Description
===============
At the beginning of KARL3, we wanted to get an idea of performance with some data in the site. "Performance" was somewhat amorphous: reads versus writes on various screens, searches, memory footprint, using ab to pound the site, and so on. I wrote a console script and checked in some sample data that would bulk-load the site.
This console script had bit-rotted, and it was somewhat dumb to keep large XML files checked into the src/osi customization package for testing purposes, so I removed that data. I haven't yet removed the console script.
Spec
====
- Console script that uses ZEO to create small, medium, and large example sites
- Use the fake-view method from src/osi/
- Wire up unit tests that will catch it if the basics of the sample data loader get out of sync; design the script with this testability in mind
- Find some way (a lorem ipsum module, an ispell dictionary) to generate large amounts of sample data containing unique words
- Use some small .doc and .pdf files for file content
- Try to get a representative sample of content types into communities (blog entries, files, wiki pages, calendar events)
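The unique-words requirement above can be met without any external word list. A minimal sketch (names and approach are my own, not from the KARL codebase): encode an integer index in base 26 so every generated word is distinct, then assemble pseudo-lorem paragraphs from that pool. Unique words mean each search hit can be traced back to a specific piece of sample content.

```python
import random
import string

def unique_words(count):
    """Generate `count` distinct lowercase words by encoding an
    integer index in base 26 (0 -> 'a', 25 -> 'z', 26 -> 'ba', ...).
    Every word is unique, so full-text search results can be tied
    back to exactly one piece of sample content."""
    words = []
    for i in range(count):
        n, chars = i, []
        while True:
            n, rem = divmod(n, 26)
            chars.append(string.ascii_lowercase[rem])
            if n == 0:
                break
        words.append(''.join(reversed(chars)))
    return words

def lorem(words, sentence_len=8, sentences=5):
    """Build a paragraph of pseudo-lorem text from a pool of words."""
    rng = random.Random(42)  # fixed seed keeps sample data reproducible
    out = []
    for _ in range(sentences):
        picked = [rng.choice(words) for _ in range(sentence_len)]
        out.append(' '.join(picked).capitalize() + '.')
    return ' '.join(out)
```

A fixed random seed is deliberate: it makes the small, medium, and large sample sites reproducible across runs, which also helps the unit tests mentioned above.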
Tasks
=====
- Read the src/osi/
- Then, svn remove it and remove the entry point in src/osi/setup.py
- Create a new bulksample.py (or similarly named script) somewhere in src/karl (or karlsample)
- Document in the main README.txt
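The entry-point bookkeeping in the tasks above would look roughly like this in setuptools. This is a sketch only: the package name, module path, and function name are assumptions, not taken from the actual KARL source.

```python
# Hypothetical setup.py for the new script's package: the old
# sample-data entry point is removed from src/osi/setup.py, and the
# replacement bulksample script is registered here instead.
from setuptools import setup, find_packages

setup(
    name='karlsample',          # assumed package name
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            # Running "bulksample" on the command line would invoke
            # the main() function of the (assumed) bulksample module.
            'bulksample = karlsample.bulksample:main',
        ],
    },
)
```

After reinstalling the package in development mode, the console script appears on the path, which is the behavior the README.txt documentation should describe.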
Changed in karl3:
assignee: Carlos de la Guardia (cguardia) → Shane Hathaway (shane-hathawaymix)

Changed in karl3:
status: Fix Committed → Fix Released
Shane, I know you're out today and tomorrow. When you get a chance, could you give an update on this?