Don't load whole SQL file into memory at once
Bug #1188634 reported by Daniel Holbach
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| D-A-T Overview | Fix Released | Medium | Andrew Starr-Bochicchio | |
Bug Description
Right now we load the entire 3 GB SQL file into memory at once, which will always cause problems on smaller servers.
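A minimal sketch of the streaming alternative, under the assumption that the importer processes the dump statement by statement (`iter_statements` is a hypothetical helper, not the actual code in get-udd-data.py):

```python
def iter_statements(path):
    """Yield SQL statements from a dump file one at a time.

    Iterating over the file line by line keeps memory usage roughly
    constant even for a multi-gigabyte dump, whereas open(path).read()
    would hold the entire file in RAM at once.
    """
    buf = []
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            buf.append(line)
            # Naive statement boundary: a line ending in ";".
            if line.rstrip().endswith(";"):
                yield "".join(buf)
                buf = []
    if buf:  # trailing statement without a final ";"
        yield "".join(buf)
```

Each yielded statement can then be executed and discarded, so peak memory is bounded by the largest single statement rather than the whole file.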
Related branches
lp:~andrewsomething/dat-overview/use_new_dump
- Daniel Holbach: Approve
Diff: 142 lines (+35/-76), 2 files modified
- overview/uploads/common/udd.py (+0/-64)
- overview/uploads/management/commands/get-udd-data.py (+35/-12)
Changed in dat-overview:
status: New → In Progress
importance: Undecided → Medium
assignee: nobody → Andrew Starr-Bochicchio (andrewsomething)
Changed in dat-overview:
status: In Progress → Fix Released
In the meeting, I suggested that maybe we could connect to UDD programmatically. Turns out UDD only accepts guest connections from wagner.debian.org and quantz.debian.org, so that's a no go. But since I have access to those machines, I set up a cron job on wagner to dump just the ubuntu_upload_history table and then scp it over to my public_html on alioth.debian.org:
http://alioth.debian.org/~asb/udd/ubuntu_upload_history.sql
Right now, it is running once a day at 22:00 UTC. If we start using that rather than the full dump of UDD, it should help with both the memory and bandwidth issues. The dump is about 159 MB.
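Fetching the smaller dump can itself be done without holding it in memory. A sketch, assuming a plain HTTP download of the URL above (the chunk size and destination filename are illustrative choices, not part of the project):

```python
import shutil
import urllib.request

DUMP_URL = "http://alioth.debian.org/~asb/udd/ubuntu_upload_history.sql"

def fetch_dump(url=DUMP_URL, dest="ubuntu_upload_history.sql",
               chunk_size=64 * 1024):
    """Stream the table dump to disk in fixed-size chunks.

    shutil.copyfileobj copies chunk_size bytes at a time, so peak
    memory stays near chunk_size regardless of the dump's size
    (~159 MB here), rather than buffering the whole response.
    """
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out, chunk_size)
```

Combined with statement-by-statement parsing of the downloaded file, this keeps both the download and the import within a small, fixed memory footprint.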