Can you point CloudFront at the repo bucket to make a public repo?
Hi, nice post! I use this same process for deployment, S3 + deb packages + APT from individual hosts, but I found this gem: https://github.com/krobertson/deb-s3 that takes care of the package reindexing issue you describe solving with Lambda. Check it out, I think it saves you a lot of time and effort.
Thanks for the suggestion. I chose not to use deb-s3 mainly because I saw the word "gem" in there. Beyond that, it looks like deb-s3 requires a local copy of the packages you are uploading to S3. My workflow is a little different: the packages stay in S3 the whole time. We keep a list of which .debs go in a release, copy those from one S3 bucket to a folder in the apt repo S3 bucket, and the package index gets generated automatically.
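For anyone curious, that copy step can be sketched roughly like this. The bucket names, the manifest format, and the `plan_copies` helper are all made up for illustration; the actual workflow may differ. The point is that `copy_object` does a server-side copy, so the .debs never leave S3:

```python
def plan_copies(manifest, release):
    """Map each .deb listed in a release manifest to its destination key
    under a per-release folder in the apt repo bucket."""
    return [(deb, f"{release}/{deb}") for deb in manifest]

def copy_release(manifest, release,
                 src_bucket="build-artifacts",  # hypothetical bucket names
                 dst_bucket="apt-repo"):
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    for src_key, dst_key in plan_copies(manifest, release):
        # Server-side copy: no download/upload through the caller
        s3.copy_object(Bucket=dst_bucket, Key=dst_key,
                       CopySource={"Bucket": src_bucket, "Key": src_key})
```

Each copy into the repo bucket then fires the Lambda that regenerates the index.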
What if the whole process takes more than 5 minutes? I thought AWS Lambda execution time is (currently) limited to 5 minutes max.
With this design, each .deb added to the S3 bucket triggers a Lambda that creates the cached control data. That is the most time-consuming part, and it depends on how large the .deb is. It would have to be a monster .deb for extracting the control file to take more than 5 minutes, and at that point the Lambda would probably break first by hitting its memory limit. The next step reads in all that cached control data, which is just reading a bunch of small S3 objects, so unless there are (probably) hundreds of thousands of .debs it should stay well under 5 minutes.
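That second step, merging the cached control data into a Packages index, is basically just string assembly. A rough sketch, assuming the cached data is one dict of control fields per .deb (the function and the sample data here are invented for illustration; the field names follow the Debian Packages format):

```python
def build_packages_index(stanzas):
    """Join cached control stanzas (one dict per .deb) into the text of an
    apt Packages file: 'Key: value' lines, stanzas separated by a blank line."""
    blocks = []
    for stanza in stanzas:
        blocks.append("\n".join(f"{k}: {v}" for k, v in stanza.items()))
    return "\n\n".join(blocks) + "\n"

# Hypothetical cached control data read back from small S3 objects
cached = [
    {"Package": "foo", "Version": "1.0", "Architecture": "amd64",
     "Filename": "pool/foo_1.0_amd64.deb", "Size": "1234"},
    {"Package": "bar", "Version": "2.1", "Architecture": "amd64",
     "Filename": "pool/bar_2.1_amd64.deb", "Size": "5678"},
]
index = build_packages_index(cached)
```

Since each stanza is tiny, this scales with the number of packages, not their sizes, which is why the merge step stays fast.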
I would expect the first failure to be hitting a memory limit on a really large package, in which case you could just raise the allowed memory. However, the code is careful to read the file in chunks rather than all at once. You could probably still create a bogus control file that would fill up memory when it was read in.
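For concreteness, the chunked-read pattern looks something like this. Hashing is a natural example since the index needs checksums; `md5_in_chunks` is illustrative, not the post's actual code:

```python
import hashlib

def md5_in_chunks(fileobj, chunk_size=1024 * 1024):
    """Hash a possibly huge stream (e.g. an S3 object body) without loading
    it into memory: peak memory is bounded by chunk_size, not file size."""
    h = hashlib.md5()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()
```

The control file, by contrast, is small enough to read whole in the normal case, which is exactly the assumption a bogus giant control file would break.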