FATAL FwdState::noteDestinationsEnd exception: opening()
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| squid (Ubuntu) | Fix Released | Low | Athos Ribeiro | |
| Jammy | Fix Released | Low | Athos Ribeiro | |
Bug Description
[ Impact ]
From the bug fix at https:/
"""
FwdState used to check serverConn to decide whether to open a connection
to forward the request. ... a nil serverConn pointer
does not imply that a new connection should be opened: FwdState
helper jobs may already be working on preparing an existing open
connection (e.g., sending a CONNECT request or negotiating encryption).
Bad serverConn checks in both FwdState::noteDestination() and
FwdState::noteDestinationsEnd() methods led to extra connectStart()
calls creating two conflicting concurrent helper jobs.
"""
This leads squid to random crashes such as:
FATAL FwdState::noteDestinationsEnd exception: opening()
[ Test plan ]
Since we have no reliable reproducer for the issue, we will need to rely on the upstream test suite introduced with the fix and on user tests in -proposed and post release reports.
[ Where problems could occur ]
The regression potential for the changeset being applied here is listed at LP: #2013423.
[ Other Info ]
This bug is being fixed as part of the squid 5.7 MRE at LP: #2013423.
[ Original message ]
Squid 5.2 shipped with jammy is affected by a bug that causes random crashes. The underlying issue was fixed in version 6 and backported to 5.4.1.
Additional information:
Original PR for v6: https:/
Original bug report: https:/
5.4.1 backport: https:/
Changed in squid (Ubuntu):
status: Incomplete → New
tags: added: server-todo; removed: server-triage-discuss
Changed in squid (Ubuntu):
assignee: nobody → Athos Ribeiro (athos-ribeiro)
Changed in squid (Ubuntu Jammy):
status: New → Triaged
assignee: nobody → Athos Ribeiro (athos-ribeiro)
Changed in squid (Ubuntu):
status: Triaged → Fix Released
Changed in squid (Ubuntu Jammy):
importance: Undecided → Low
description: updated
Hi laszloj,
The upstream bug #5055 mentions "This change has several overlapping parts. Unfortunately, merging individual parts is both difficult and likely to cause crashes." Indeed, it looks like several upstream commits would need to be backported, involving a non-trivial amount of code changes. Because of that, it is especially important to define a reproduction case; otherwise it is hard to know for certain that the patches actually fix the problem, and it will be challenging to get this approved for SRU to jammy.
However, we may be able to proceed if you can define the bounds on the random crashes. How often do the crashes happen? E.g. daily? Every 1000 connection requests? Would you be able to script a workload that attempts a bunch of connections and triggers the crash on your hardware?
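A workload script along these lines might serve as a starting point. This is a sketch only: the proxy address, target host, and round/concurrency counts are assumptions to be adjusted for the machine under test, and since there is no reliable reproducer it may or may not trigger the crash. It fires batches of concurrent CONNECT requests so that several forwarding helper jobs overlap, while the tester watches squid's cache.log for the FATAL message.

```python
import socket
import threading

# Assumed values; adjust for the environment under test.
PROXY_HOST, PROXY_PORT = "127.0.0.1", 3128
TARGET = "example.com:443"

def build_connect_request(target: str) -> bytes:
    # Minimal HTTP/1.1 CONNECT request for tunnelling TLS through the proxy.
    return (f"CONNECT {target} HTTP/1.1\r\n"
            f"Host: {target}\r\n\r\n").encode("ascii")

def one_attempt() -> None:
    # Open a connection to the proxy, send CONNECT, read the status line, drop.
    try:
        with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=5) as s:
            s.sendall(build_connect_request(TARGET))
            s.recv(1024)
    except OSError:
        pass  # connection errors are expected noise during a stress run

def hammer(rounds: int = 100, concurrency: int = 50) -> None:
    # Launch batches of concurrent attempts so helper jobs overlap in squid.
    for _ in range(rounds):
        threads = [threading.Thread(target=one_attempt)
                   for _ in range(concurrency)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

if __name__ == "__main__":
    hammer()  # meanwhile, watch cache.log for the FATAL FwdState message
```

Capturing how many rounds it takes before a crash (if one occurs) would give the bounds asked for above.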