juju attach does not like to be canceled

Bug #1621658 reported by Matt Bruzek
This bug affects 2 people
Affects     Status    Importance  Assigned to  Milestone
juju        Triaged   High        Unassigned
juju-core   Invalid   Undecided   Unassigned

Bug Description

I was running a `juju attach` command on an unreliable network. The file was not uploading, so I pressed Ctrl+C to cancel the upload. Afterward I was unable to upload the file at all; I got the following errors:

$ juju upgrade-charm kubernetes-master --path=/home/mbruzek/workspace/charms/builds/kubernetes-master
$ juju attach kubernetes-master kubernetes=/tmp/kubernetes-v1.3.6.tar.gz
ERROR failed to upload resource "kubernetes": PUT https://54.183.84.103:17070/model/05d1143e-a5ba-4b1e-8858-c17080c2d73e/applications/kubernetes-master/resources/kubernetes: already staged
$ juju attach kubernetes-master kubernetes=/tmp/kubernetes-v1.3.6.tar.gz
ERROR failed to upload resource "kubernetes": PUT https://54.183.84.103:17070/model/05d1143e-a5ba-4b1e-8858-c17080c2d73e/applications/kubernetes-master/resources/kubernetes: Put https://54.183.84.103:17070/model/05d1143e-a5ba-4b1e-8858-c17080c2d73e/applications/kubernetes-master/resources/kubernetes: write tcp 192.168.43.151:35886->54.183.84.103:17070: write: connection reset by peer

I understand Ctrl+C is not a good thing to do, but we should have some kind of recovery scenario or a good error message to help users recover from their mistakes.
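The failure mode here is that the server keeps a "staged" record for the resource after the interrupted PUT, so retries are rejected with "already staged". One client-side recovery pattern (a sketch only, not Juju's actual implementation; `ResourceUploader` and its methods are hypothetical names) is to roll back the staging record whenever the transfer does not complete, including on Ctrl+C:

```python
class StagedUploadError(Exception):
    """Raised when a resource is staged twice, mirroring 'already staged'."""


class ResourceUploader:
    """Sketch of an upload client that removes its staged record
    if the transfer is interrupted or fails, so a retry can succeed."""

    def __init__(self):
        self.staged = set()

    def stage(self, name):
        if name in self.staged:
            raise StagedUploadError(f'resource "{name}": already staged')
        self.staged.add(name)

    def unstage(self, name):
        self.staged.discard(name)

    def upload(self, name, send):
        self.stage(name)
        try:
            # send() may raise KeyboardInterrupt (Ctrl+C) or a network error.
            send()
        except BaseException:
            # Roll back the staging record on ANY failure, then re-raise,
            # so the next attempt does not hit "already staged".
            self.unstage(name)
            raise
        # Success: the staging record has been consumed.
        self.unstage(name)
```

With this shape, an interrupted upload leaves nothing staged behind, and a second `upload()` call for the same resource proceeds normally instead of erroring out.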

Tags: sts
Matt Bruzek (mbruzek)
summary: - juju attach does not like to be cancelled
+ juju attach does not like to be canceled
Changed in juju-core:
status: New → Invalid
Changed in juju:
status: New → Triaged
importance: Undecided → Medium
milestone: none → 2.1.0
Revision history for this message
Casey Marshall (cmars) wrote :

I experienced the same thing, except I didn't Ctrl-C the attach. It just timed out after several minutes. The kubernetes resource is about 1 GB in size.

Revision history for this message
Casey Marshall (cmars) wrote :

Repeatable:

c@zeugmatic:~/src/master-node-split/cluster/juju$ juju attach kubernetes-master kubernetes=kubernetes.tar.gz
ERROR failed to upload resource "kubernetes": PUT https://10.55.32.49:17070/model/482a7b26-aff4-4bd6-8231-90adde29b8d3/applications/kubernetes-master/resources/kubernetes: cannot clean up after failed storage operation because: EOF: cannot add resource "buckets/482a7b26-aff4-4bd6-8231-90adde29b8d3/application-kubernetes-master/resources/kubernetes" to store at storage path "16e016a8-8dff-44ba-88dc-d96723a5d776": failed to write data: write tcp 10.55.32.49:42144->10.55.32.49:37017: write: broken pipe
c@zeugmatic:~/src/master-node-split/cluster/juju$ juju attach kubernetes-master kubernetes=kubernetes.tar.gz
ERROR failed to upload resource "kubernetes": PUT https://10.55.32.49:17070/model/482a7b26-aff4-4bd6-8231-90adde29b8d3/applications/kubernetes-master/resources/kubernetes: already staged

Curtis Hovey (sinzui)
Changed in juju:
milestone: 2.1-rc2 → none
Felipe Reyes (freyes)
tags: added: sts
Revision history for this message
Felipe Reyes (freyes) wrote :

This is the workaround I came up with:

Connect to mongodb and run the following query to delete the staged documents:

To list the documents in staged state:

db.resources.find({"_id": /.*resource#.*#staged/}).pretty();

To remove the documents:

db.resources.deleteMany({"_id": /.*resource#.*#staged/});

Does anyone have thoughts on why this approach could be wrong or incomplete? For example, other documents referencing the staged objects, etc.
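For anyone checking what the queries above would touch before running `deleteMany`, the `_id` filter is a plain substring-style regex. A minimal stdlib Python sketch (the `_id` values below are made up for illustration, not taken from a real controller):

```python
import re

# Same pattern used in the mongo shell queries above; mongo's /.../ filter
# matches anywhere in the string, which corresponds to re.search here.
STAGED = re.compile(r".*resource#.*#staged")

# Hypothetical document _ids: one staged upload record, one committed resource.
ids = [
    "482a7b26:resource#kubernetes-master/kubernetes#staged",
    "482a7b26:resource#kubernetes-master/kubernetes",
]

# Only the "#staged" document matches, so only the leftover staging
# record would be deleted; committed resource documents are untouched.
staged = [doc_id for doc_id in ids if STAGED.search(doc_id)]
```

Running the `find` query first, as suggested, is the safe way to confirm that only these staged documents match before issuing the `deleteMany`.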

Pen Gale (pengale)
Changed in juju:
importance: Medium → High
milestone: none → 3.0.0