[parser] find a better solution than pulling the entire wiki contents into memory
Bug #1599974 reported by Joe Talbott
This bug affects 1 person
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Snapcraft | Confirmed | Undecided | Unassigned |
Bug Description
Facundo Batista recommends something along these lines for the part processing:
def splitgen(text, sep):
    """Lazily yield the parts of text split by sep (a generator
    equivalent of text.split(sep))."""
    pos_from = 0
    while True:
        try:
            pos_to = text.index(sep, pos_from)
        except ValueError:
            # No further separator: yield the trailing part and stop.
            yield text[pos_from:]
            break
        yield text[pos_from:pos_to]
        pos_from = pos_to + len(sep)
# Test strings covering the edge cases; some cases were truncated in
# the original report and are reconstructed here as representative examples.
test = [
    'foo',
    'foo\n---\nbar',
    'foo\n---\n',
    '\n---\nfoo',
    'foo\n---\nbar\n---\nbaz',
]
for t in test:
    assert t.split("\n---\n") == list(splitgen(t, "\n---\n"))
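The generator above still requires the whole text to be in memory up front; only the resulting parts are produced lazily. To address the concern in the bug title, a chunked variant that reads incrementally from a file-like object could look like the following. This is a hypothetical sketch, not code from the report: splitgen_stream and chunk_size are illustrative names, and the rescan of the buffer is kept simple rather than optimized.

```python
import io

def splitgen_stream(fobj, sep, chunk_size=8192):
    """Yield sep-delimited parts from a file-like object without
    loading the entire contents into memory at once.  Sketch only."""
    buf = ''
    while True:
        chunk = fobj.read(chunk_size)
        if not chunk:
            # End of input: whatever remains is the final part.
            yield buf
            return
        buf += chunk
        while True:
            pos = buf.find(sep)
            if pos == -1:
                # Separator may straddle a chunk boundary; read more.
                break
            yield buf[:pos]
            buf = buf[pos + len(sep):]

parts = list(splitgen_stream(io.StringIO('foo\n---\nbar\n---\nbaz'), '\n---\n'))
# parts == ['foo', 'bar', 'baz']
```

Because completed parts are yielded and discarded from the buffer as soon as their separator is seen, peak memory stays proportional to the largest single part plus one chunk, rather than to the whole wiki contents.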
tags: added: wiki
Changed in snapcraft:
status: New → Confirmed
status: Confirmed → Incomplete
status: Incomplete → Confirmed