Retry flows fail with "Data too long for column 'failure' at row 1"
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
taskflow | Fix Released | Undecided | Pavlo Shchelokovskyy |
Bug Description
This is very similar to https:/
Octavia (Victoria release) + Taskflow 4.5.1 + SQLAlchemy 1.3.19 + Python 3.6 + MariaDB 10.4.17
While attaching interfaces to an amphora, a call to the Nova API hit a ConnectionReset,
and while attempting to retry, taskflow fails with a StorageFailure whose telltale sign is:
2022-01-26 09:11:04,773.773 45 ERROR taskflow.
Full trace is too big to upload to paste.opendev.org :-/ (about 95KB) so I attach it here.
It has many nested traces with

    The above exception was the direct cause of the following exception:

or

    During handling of the above exception, another exception occurred:

due to how exceptions are chained in Python 3 :-)
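These two messages come from Python 3's built-in exception chaining: explicit chaining via `raise ... from ...` produces the "direct cause" line, while raising a new exception inside an `except` block produces the "during handling" line. A minimal illustration, unrelated to the taskflow code itself:

```python
import traceback

def retry_after_failure():
    """Raise a new error from inside an except block, chaining the original."""
    try:
        raise ConnectionResetError("connection reset by peer")
    except ConnectionResetError as exc:
        # Explicit chaining: the traceback will include the
        # "direct cause" marker between the two traces.
        raise RuntimeError("retry failed") from exc

try:
    retry_after_failure()
except RuntimeError:
    tb = traceback.format_exc()

print("The above exception was the direct cause" in tb)  # True
```

With deeply nested retries, each level adds another full traceback to the chain, which is how the serialized failure grows to tens of kilobytes.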
The current column type for `failure` is TEXT, which holds at most 64KB (for single-byte encodings, less for multi-byte Unicode), and judging by the "(92518 characters truncated)" in the log, we overshoot that limit considerably in this example.
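The arithmetic, using the documented MySQL/MariaDB column capacities (the ~95KB figure is the approximate trace size from this report):

```python
# Documented MySQL/MariaDB string column capacities, in bytes.
TEXT_MAX = 2**16 - 1        # 65,535 (~64KB)
MEDIUMTEXT_MAX = 2**24 - 1  # 16,777,215 (~16MB)
LONGTEXT_MAX = 2**32 - 1    # ~4GB

trace_bytes = 95 * 1024  # ~95KB serialized failure from this report

print(trace_bytes > TEXT_MAX)       # True: overflows TEXT, hence the error
print(trace_bytes <= LONGTEXT_MAX)  # True: fits comfortably in LONGTEXT
```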
Similar to the other mentioned issues, we need to widen this JSON field to LONGTEXT.
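A sketch of what such a widening can look like at the SQLAlchemy model level (the table and column names here are illustrative, not necessarily those in the actual taskflow patch): `Text().with_variant(...)` keeps plain TEXT on other backends while emitting LONGTEXT on MySQL/MariaDB.

```python
from sqlalchemy import Column, Integer, MetaData, Table, Text
from sqlalchemy.dialects import mysql

metadata = MetaData()

# Hypothetical table for illustration; the real taskflow schema differs.
retry_details = Table(
    "retrydetails",
    metadata,
    Column("id", Integer, primary_key=True),
    # Plain TEXT everywhere, but LONGTEXT on MySQL/MariaDB so large
    # serialized failures (e.g. ~95KB tracebacks) fit.
    Column("failure", Text().with_variant(mysql.LONGTEXT(), "mysql")),
)

failure_type = retry_details.c.failure.type
print(failure_type.compile(dialect=mysql.dialect()))  # LONGTEXT
```

Existing deployments would additionally need a schema migration (e.g. an `ALTER TABLE ... MODIFY` issued via Alembic) to convert the column in place.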
Changed in taskflow:
assignee: nobody → Pavlo Shchelokovskyy (pshchelo)
Fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/taskflow/+/826722