We have a system whereby we package up some code, build a web server, and bundle it all into an AMI, ready to be served on AWS.
It was all working fine until I updated both Ansible and Packer; at the same time we converted the packer.json file to the new HCL format using the built-in `packer hcl2_upgrade` command. Since then, the build fails with:
```
amazon-ebs.autogenerated_1: fatal: [default]: FAILED! => {"changed": false, "msg": "No package matching 'build-essential' is available"}
```

Versions: ansible [core 2.13.13], Packer v1.10.2.
We are using this image as the base for the server:

```hcl
filters = {
  name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
  root-device-type    = "ebs"
  virtualization-type = "hvm"
}
```
Now, when Ansible starts installing the packages we need, it has trouble with two packages in particular:

- build-essential
- supervisor
We have tried running apt-get update in different ways, both via Ansible's shell module and the regular apt module, with various cache-update options (sketched below).
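Roughly, the variations looked like this (the task names and the cache_valid_time value here are illustrative, not our exact playbook):

```yaml
# Variation 1: refresh the package index via the apt module
- name: Update apt cache (apt module)
  apt:
    update_cache: yes
    cache_valid_time: 3600  # illustrative; we tried with and without this

# Variation 2: refresh the package index via the shell module
- name: Update apt cache (shell)
  shell: apt-get update
```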
The strange thing is that the more output we print to the screen (for example by running the playbook with --verbose, or by cat-ing a file; I just cat'd /etc/apt/sources.list in a task right before the package install), the further along the process gets. This leads me to believe there is a race condition somewhere, but I cannot for the life of me figure out where.
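For reference, the debug step we slipped in looks roughly like this (the task names are ours; the cat itself is the only point):

```yaml
# Debug task added immediately before the package install; oddly, its
# mere presence lets the build progress further
- name: Dump apt sources for debugging
  shell: cat /etc/apt/sources.list
  register: apt_sources

- name: Show apt sources
  debug:
    var: apt_sources.stdout_lines
```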
Here is the task we are using to install the packages:
```yaml
- name: Install common dependencies
  apt:
    name:
      - htop
      - nano
      - wget
      - perl
      - unzip
      - tcpdump
      - build-essential
      - python3-distutils
      - libwww-perl
      - supervisor
      - mysql-client
    state: present
```
As I mentioned above, this worked flawlessly until we updated Ansible and Packer.