Instances launched in a VPC cannot access ec2.archive.ubuntu.com

Bug #615545 reported by Gabriel Nell
This bug affects 7 people
Affects               Status         Importance   Assigned to   Milestone
cloud-init (Ubuntu)   Fix Released   Medium       Unassigned
  Lucid               Won't Fix      Undecided    Unassigned

Bug Description

sources.list is helpfully configured to us-east-1.ec2.archive.ubuntu.com for instances that I launch in US-EAST-1 on EC2. However, instances launched in a Virtual Private Cloud (VPC) can only access machines in their local subnet, private machines on the connected LAN, and the Internet via the VPC tunnel.

Because us-east-1.ec2.archive.ubuntu.com resolves to an internal EC2 10.0.0.0/8 address, instances launched in a VPC will be unable to perform any apt operations. The user must update sources.list to point to us.archive.ubuntu.com to use apt.
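
For illustration, a minimal shell sketch of confirming the problem and applying the manual workaround described above (not part of the original report; the region name is only an example, and it assumes the VPC has outbound Internet access to the public archive):

  # Confirm that the regional mirror resolves to a private 10.0.0.0/8 address
  getent hosts us-east-1.ec2.archive.ubuntu.com
  # Manual workaround: point sources.list at the public archive instead
  sudo sed -i 's/us-east-1\.ec2\.archive\.ubuntu\.com/us.archive.ubuntu.com/g' /etc/apt/sources.list
  sudo apt-get update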

Proposed solution:

1) Detect that the machine was launched in a VPC. I'm not sure what the ideal way to determine this is without doing a DescribeInstances. But I did notice that, when in a VPC, curl http://169.254.169.254/latest/meta-data/ does not list public-ipv4 and public-hostname as possibilities. So perhaps the absence of these could be used to determine that the instance is in a VPC.
2) Fall back to the public us.archive.ubuntu.com (or whatever mirror is appropriate for the region) if us-east-1.ec2.archive.ubuntu.com cannot be reached. (A rough sketch of both ideas follows below.)
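
A rough shell sketch of what I mean (illustrative only; the real logic would live in cloud-init's Python data source code):

  # If the metadata listing has no public-ipv4 entry, assume we are in a VPC
  md=http://169.254.169.254/latest/meta-data
  if curl -s "$md/" | grep -q '^public-ipv4$'; then
      az=$(curl -s "$md/placement/availability-zone")
      region=${az%?}   # e.g. "us-east-1a" -> "us-east-1"
      mirror="http://$region.ec2.archive.ubuntu.com/ubuntu/"
  else
      # No public-ipv4: fall back to the public archive
      mirror="http://us.archive.ubuntu.com/ubuntu/"
  fi
  echo "would use mirror: $mirror"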

=== SRU Information ===
[Impact]
After launch of an instance in a VPC (virtual private cloud) of EC2, the user must update /etc/apt/sources.list, as cloud-init has selected a mirror that is not available to the instance.

[Development Fix]
The simple fix is to query the EC2 metadata service and determine whether the instance booted inside a VPC (is_vpc). If so, use the fallback apt source rather than the EC2-specific region source. This was added in the 10.10 cycle.

[Stable Fix]
Same as development fix.

[Test Case]
 * a.) Boot instance in EC2 in a VPC
 * b.) Boot instance in EC2 not in a VPC
 * Instance 'a' should have 'archive.ubuntu.com' in /etc/apt/sources.list
   * grep "http://archive.ubuntu.com" /etc/apt/sources.list
 * Instance 'b' should have '<region>.ec2.archive.ubuntu.com' in /etc/apt/sources.list
   * az=$(wget http://instance-data/latest/meta-data/placement/availability-zone -O - -q)
   * region=${az%?} ; # az="us-east-1a", region="us-east-1"
   * grep "http://$region.ec2.archive.ubuntu.com" /etc/apt/sources.list

[Regression Potential]
Inside of EC2, the regression potential is almost non-existent. This exact fix has been in place since 10.10.
Outside of EC2, the potential for regression would be in EC2-like clouds whose metadata service looks similar to EC2's. Since the fix has been in place for more than 18 months, the chance of this scenario causing a failure is very low.

Scott Moser (smoser)
affects: linux-ec2 (Ubuntu) → cloud-init (Ubuntu)
Changed in cloud-init (Ubuntu):
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
Scott Moser (smoser) wrote :

| posting from a mail from Gabriel:
|
| The design of VPC is that instances launched in VPC cannot communicate with
| the internal EC2 network. They can only communicate with other instances in
| the subnet, and the machines on the other end of the tunnel. All internet
| traffic is routed through the customer's internet connection. I don't see
| any possibility to enable communication with the internal ec2 network.
|
| Would any of my suggestions listed in the bug work? Eg, use lack of a public
| ip, or adding the public ubuntu archive as a fallback?

Yeah, you're probably right; lack of a public IP would probably work. We may be able to check for reachability of the mirror in DataSourceEc2. Right now we just check that it resolves.

2 things you can easily do to work around this issue:
1.) launch instances with user data like:
#cloud-config
apt_mirror: http://us.archive.ubuntu.com/ubuntu/

The above will allow other portions of cloud-config that assume apt is set up properly to work.

2.) grab the apt mirror hostname from /etc/apt/sources.list and add an entry in /etc/hosts pointing it to whatever address is right for you (see the sketch below).
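
A hypothetical illustration of option 2 (not from the original comment; it assumes the VPC can reach us.archive.ubuntu.com directly):

  # Find the mirror hostname cloud-init wrote into sources.list
  mirror_host=$(sed -n 's|^deb http://\([^/ ]*\).*|\1|p' /etc/apt/sources.list | head -n1)
  # Resolve a reachable public mirror and pin the EC2 mirror name to it
  public_ip=$(getent hosts us.archive.ubuntu.com | awk '{print $1; exit}')
  echo "$public_ip $mirror_host" | sudo tee -a /etc/hosts
  sudo apt-get update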

Revision history for this message
Scott Moser (smoser) wrote :

@Gabriel

Could you comment with the output of the following command from inside a VPC instance:
python -c 'import boto.utils, pprint; pprint.pprint(boto.utils.get_instance_metadata())'

Revision history for this message
Gabriel Nell (gabriel-nell) wrote :

Sure:

{'ami-id': 'ami-328d655b',
 'ami-launch-index': '0',
 'ami-manifest-path': '(unknown)',
 'block-device-mapping': {'ami': '/dev/sda1',
                          'root': '/dev/sda1',
                          'swap': 'sda3'},
 'hostname': 'ip-10-10-11-5',
 'instance-action': 'none',
 'instance-id': 'i-ea292481',
 'instance-type': 'm1.small',
 'kernel-id': 'aki-754aa41c',
 'local-hostname': 'ip-10-10-11-5',
 'local-ipv4': '10.10.11.5',
 'placement': {'availability-zone': 'us-east-1d'},
 'reservation-id': 'r-6ee23005',
 'security-groups': 'default'}

Revision history for this message
Scott Moser (smoser) wrote :

Gabriel,
  Could you try the attached deb and report back if it fixes the problem?

You can test this by:
- boot instance
- ssh instance
- get this deb (wget or scp)
- sudo dpkg -i cloud-init_0.5.15-0ubuntu3~ppa1_all.deb
- sudo rm /var/lib/cloud/sem/config-apt-update-upgrade.*
- sudo reboot

on next boot, your /etc/apt/sources.list should have archive.ubuntu.com in it. If it does not, try 'rm -Rf /var/lib/cloud' and boot again.

Thanks.

Revision history for this message
Scott Moser (smoser) wrote :

If you'd rather get it from a PPA, the attached deb should appear in my PPA shortly. I attached it here because it's set to start building in 4 hours.

Revision history for this message
Gabriel Nell (gabriel-nell) wrote :

Nice, it worked! After starting a Maverick daily, applying the deb, removing config-apt-update-upgrade.*, and rebooting, my sources.list was pointing to archive.ubuntu.com and apt-get update worked.

Thanks!

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package cloud-init - 0.5.15-0ubuntu3

---------------
cloud-init (0.5.15-0ubuntu3) maverick; urgency=low

  * do not use ec2 ubuntu archive if instance is VPC (LP: #615545)
 -- Scott Moser <email address hidden> Thu, 16 Sep 2010 04:28:55 -0400

Changed in cloud-init (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in cloud-init (Ubuntu Lucid):
status: New → Confirmed
Scott Moser (smoser)
description: updated
Scott Moser (smoser)
Changed in cloud-init (Ubuntu Lucid):
status: Confirmed → In Progress
Revision history for this message
Martin Pitt (pitti) wrote : Please test proposed package

Hello Gabriel, or anyone else affected,

Accepted cloud-init into lucid-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

Changed in cloud-init (Ubuntu Lucid):
status: In Progress → Fix Committed
tags: added: verification-needed
Revision history for this message
Jack Murgia (support-cloudcontrollers) wrote :

Hi Folks,

To whoever manages DNS for this repository: a more elegant solution, not requiring any package patches, would have been to follow this practice for DNS on EC2.

Use CNAMEs pointing to the EC2-assigned fully-qualified domain names instead of A records. For example, at the moment you are using:

us-west-1.ec2.archive.ubuntu.com. 600 IN A 10.162.150.127

This address is apparently not routable from the outside world (perhaps to avoid bandwidth charges?)

Had you used a routable EC2 Elastic IP, and a CNAME record pointing to the EC2-assigned FQDN, lookups by VPC servers would return the public Elastic IP, like this:

;; ANSWER SECTION:
us-west-1.ec2.archive.ubuntu.com. 600 IN CNAME ec2-108-20-220-125.compute-1.amazonaws.com.
ec2-108-20-220-125.compute-1.amazonaws.com. 300 IN A 108.20.220.125

Instances launched normally in EC2, on the other hand, would receive the private address:

;; ANSWER SECTION:
us-west-1.ec2.archive.ubuntu.com. 600 IN CNAME ec2-108-20-220-125.compute-1.amazonaws.com.
ec2-108-20-220-125.compute-1.amazonaws.com. 300 IN A 10.252.111.96

I've made these addresses up, of course, and I understand you have multiple servers for each hostname, but we use this method with weighted round robin DNS on EC2 as well and it works as in the example above.
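
(Illustrative only, not part of the original comment: a quick way to see which kind of record the mirror name currently returns; dig is in the dnsutils package, and output will vary by region and over time.)

  dig +noall +answer us-west-1.ec2.archive.ubuntu.com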

Revision history for this message
Eric Hammond (esh) wrote :

+1 for cloudcontrol's recommendation to use CNAMEs. I've been recommending this to Canonical since we were discussing the initial setup of EC2 dedicated repositories. It would have avoided a couple issues that have happened since and would help prevent future problems as AWS releases new features. Amazon has also recommended this.

Revision history for this message
Clint Byrum (clint-fewbar) wrote : Re: [Bug 615545] Re: Instances launched in a VPC cannot access ec2.archive.ubuntu.com

Excerpts from cloudcontrol's message of Thu Jan 12 23:27:07 UTC 2012:
> Hi Folks,
>
> To whoever manages DNS for this repository: a more elegant solution not
> requiring an package patches would have been to follow this practice for
> DNS on EC2.
>
> Try to use CNAMES to the fully-qualified domain name EC2 instead of A
> records. For example, at the moment you are using:
>
> us-west-1.ec2.archive.ubuntu.com. 600 IN A 10.162.150.127
>
> This address is apparently not routable from the outside world (perhaps
> to avoid bandwidth charges?)
>
> Had you used a routable EC2 Elastic IP, and a CNAME record pointing to
> the EC2 assigned FQDN, lookup requests by VPC servers would have the
> public elastic IP returned like this:
>
> ;; ANSWER SECTION:
> us-west-1.ec2.archive.ubuntu.com. 600 IN CNAME ec2-108-20-220-125.compute-1.amazonaws.com.
> ec2-108-20-220-125.compute-1.amazonaws.com. 300 IN A 108.20.220.125
>
> Lookup requests by VPC servers would have the public elastic IP
> returned, while instances launched normally in EC2 would receive the
> private address:
>
> ;; ANSWER SECTION:
> us-west-1.ec2.archive.ubuntu.com. 600 IN CNAME ec2-108-20-220-125.compute-1.amazonaws.com.
> ec2-108-20-220-125.compute-1.amazonaws.com. 300 IN A 10.252.111.96
>
> I've made these addresses up, of course, and I understand you have
> multiple servers for each hostname, but we use this method with
> weighted round robin DNS on EC2 as well and it works as in the example
> above.

Interesting, I didn't know that Amazon's servers worked this way, responding
with the internal IP.

I believe the EC2 mirrors are currently being migrated to S3:

http://cloud.ubuntu.com/2012/01/regional-s3-backed-ec2-mirrors-available-for-testing/

I am not sure how this will affect VPC instances.

Revision history for this message
Scott Moser (smoser) wrote :

I believe we talked once about doing the CNAME solution, and the decision was made not to implement it. The reason was (from memory) that if we did, all requests would then hit the external mirrors, and consequently we would have to open up all traffic to the EC2 mirrors that Canonical IS is running. The decision was not to do that.

So, the other solution here is:
a.) SRU the fix in cloud-init to lucid
b.) wait until the s3 backed mirrors are live

Option (b) is expected inside of 30 days, and the SRU takes at least 2 weeks to get into the archive.
We expect that the S3-backed mirrors will be open to the world, so this will no longer be a problem.

Revision history for this message
Eric Hammond (esh) wrote :

Scott:

- With the CNAME solution, the requests still go to the internal IP address for standard EC2 instances.

- I don't imagine that many non-EC2 people would try to configure their Ubuntu systems to use the EC2 repositories.

- Canonical would get charged the same network fees for people outside of EC2 using the S3 solution as using the CNAME solution.

- The CNAME method only requires a change to entries in Canonical's DNS servers; no action is required via SRUs or updates to AMIs.

The only objection I've heard that makes sense is a concern about the risk of increased cost from use by non-EC2 instances, but it sounds like Canonical is already willing to take that risk with the S3 solution.

This decision doesn't affect me personally. It just seems CNAME is the right approach and I'm not sure why it is not being adopted.

Revision history for this message
Jack Murgia (support-cloudcontrollers) wrote :

Agree with Eric: this will be less expensive in the long run, though the potential to use a CDN for the mirror is intriguing.

One last point in support of CNAMEs: in case there is unease with relying on the CNAME solution (as Canonical does not control routing in EC2), rest assured that this method is supported and recommended by the EC2 engineering team and is used by virtually every large deployment on EC2.

One additional benefit is that, with a low time-to-live value, you can easily replace a troubled repo server with a simple "ec2-associate-elastic-ip" command.

Revision history for this message
Felipe Reyes (freyes) wrote :

The workaround used to detect whether the instance is inside a VPC isn't working for me. I launched an EC2 instance and assigned an Elastic IP to it (all of this using CloudFormation), and when cloud-init gets the metadata, this is what it sees:

# curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-ipv4
public-keys/
reservation-id
security-groups

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
184.72.x.x

As you can see, the field public-ipv4 appears in the metadata, so cloud-init thinks the instance isn't running in a VPC and sets the apt mirror to us-east-1, which takes me back to the original situation: no access to the repositories.

I fixed this behavior with the suggested key in cloud-config.yaml (apt_mirror: http://us.archive.ubuntu.com/ubuntu/).

Revision history for this message
Noah (ncantor) wrote :

I agree with Eric and cloudcontrol that CNAMEs are the correct solution.

In the meantime, there's a problem with using any Debian package from within affected EC2 instances: you can't contact the repository to install any packages, so using a package to fix the problem presents something of a bootstrapping problem.

That includes, by the way, anything that requires apt-mirror, since apt-mirror is not part of the barebones Lucid AMI, and thus requires installation, which requires connecting to the repo...

I have an alternative solution, which requires no package installation, and which I will be using until this problem is fixed. Rather than installing packages, I have chosen the simpler direct manipulation of the sources.list file, like this:

sed -i -e 's/eu-west-1.ec2/uk/' /etc/apt/sources.list

Change 'uk' to be whatever your closest mirror is, and you'll be running again.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Excerpts from Noah's message of Thu Jan 26 11:26:08 UTC 2012:
> I agree with Eric and cloudcontrol, for CNAMEs being the correct
> solution.
>

The S3 solution is coming very soon and will negate the need for these CNAMEs, so all we can do is ask for your patience.

> In the meantime, there's a problem with using any debian package from
> within ec2 instances - you can't contact the repository to install any
> packages, so using a package to fix the problem presents something of a
> bootstrapping problem.
>

The fixed package would be included in the next updated AMIs, so this is
actually a viable solution, though I think the better one is to make the
mirrors more accessible, as the S3 implementation will do.

> That includes, by the way, anything that requires apt-mirror, since apt-
> mirror is not part of the barebones Lucid AMI, and thus requires
> installation, which requires connecting to the repo...
>
> I have an alternative solution, which requires no package installation,
> and which I will be using until this problem is fixed. Rather than
> installing packages, I have chosen the simpler direct manipulation of
> the sources.list file, like this:
>
>
> sed -i -e 's/eu-west-1.ec2/uk/' /etc/apt/sources.list
>
> Change 'uk' to be whatever your closest mirror is, and you'll be running
> again.
>

You can achieve this with a cloud-init userdata section by specifying the
apt mirror. This is all that is needed:

#cloud-config
apt_mirror: http://uk.archive.ubuntu.com/ubuntu/

Revision history for this message
Scott Moser (smoser) wrote :

On Thu, 26 Jan 2012, Clint Byrum wrote:

> Excerpts from Noah's message of Thu Jan 26 11:26:08 UTC 2012:

> You can achieve this with a cloud-init userdata section by specifying the
> apt mirror. This is all that is needed:
>
> #cloud-config
> apt_mirror: http://uk.archive.ubuntu.com/ubuntu/

Noah was pointing out that cloud-init in 10.04 does not have that option,
so ... you kind of need to hack with 'sed' there.

Revision history for this message
Steve Langasek (vorlon) wrote :

13:50 < smoser> amazon has since changed some things, and the previous fix that was in -proposed no longer actually fixes anything
13:51 < smoser> so i dropped it.

So the package in lucid-proposed will not be promoted to lucid-updates.

Changed in cloud-init (Ubuntu Lucid):
status: Fix Committed → Won't Fix
tags: added: verification-failed
removed: verification-needed
Revision history for this message
Ian Gibbs (realflash-uk) wrote :

This is still an issue. From reading the comments I'm not seeing a clear "won't fix". It seems to have got lost?

Revision history for this message
Glen Whitney (glen.whitney) wrote :

@Ian Gibbs, are you sure that your instance isn't simply running an end-of-lifed Ubuntu release (such as 19.04 Disco)? That was the difficulty when I was suffering just now from similar behavior as described in this issue. Hope this helps.
