Russell,
As with the other poster, thank you for the examples; they helped a lot. However, I am looking for one other function... and I may have to split it up... but since these commands have to traverse the internet before producing output, they are very costly and I would like to make the call only once.
That being said... my goal is to use describe_instances to display the configuration of multiple instances, where I know the instance IDs of approximately 8 and only specific tags of 2-3. I was trying to do something like this:
client = boto3.client("ec2")
filter1 = {"Name": "tag:Name", "Values": ["my_tag"]}
filter2 = {"Name": "instance-id", "Values": ["i-xxxxxxxxxxxxxxxxx"]}
client.describe_instances(Filters=[filter1, filter2])["Reservations"]
Unfortunately it did not work as planned. I believe what is happening is that "and" logic is applied when using the two filters this way. What I am looking to accomplish is an "or" statement so that I can get what I need without running such an expensive command more than once.
Do you know of such a way?
Thanks,
Scott
How many instances do you have in this AWS account and region?
To solve this, I think I would:
- request all instance descriptions
- write my own filter function / algorithm using Python
I also cannot seem to find anything in the docs that describes an "or" functionality.
In the docs, Collection methods are chainable, which seems to lend itself to "and" behavior.
I suppose I would like to hear more about your goal you are attempting to solve and less about this specific problem.
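The approach above can be sketched as a plain Python filter over a single describe_instances response; the function and sample names here are illustrative, not from the thread:

```python
def gather_instances(reservations, wanted_ids, wanted_name):
    """OR logic: keep an instance if its ID is known OR its Name tag matches.

    `reservations` is the "Reservations" list from one describe_instances call,
    e.g. boto3.client("ec2").describe_instances()["Reservations"].
    """
    matched = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if instance["InstanceId"] in wanted_ids or tags.get("Name") == wanted_name:
                matched.append(instance)
    return matched
```

This way the expensive network call happens once, and the "or" logic runs locally for free.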
I am trying to write a backup script that... is largely unnecessary: the customer wants to back up their 8 auto-scaling instances (although I keep telling them that there is no benefit to backing up these instances). I only need to back up one instance from each group, since they are all the same.
I can get one instance ID for each group from the describe_auto_scaling_groups() function. That is why I know the instance IDs of some instances.
However (and comparatively more importantly) their 2-3 staging instances, which may be destroyed and recreated from time to time (causing their IDs to change), need to be identified via their tag. Once I have these instance IDs I will use them to get their attached EBS volumes (likely only for the staging instances) and store them together in a dictionary. Then I plan to take that dictionary and make snapshots for every instance.
I ended up coming to the same conclusion you did as well, and created my own function to parse the output of the describe_instances() function.
Sometimes I have trouble translating my English to Japanese, and that may be why the customer does not understand that there is no need to back up the auto-scaling instances... but since we are still in the design phase I'm sure the requirements will change.
I'll share the code I've written so far.
I only began coding recently so there may be inefficiencies but feel free to have a look.
http://pastebin.com/HNGvZ8j7
I think you are on the right track.
Personally I prefer to use the Boto3 "resource" connection over the lower level "client" connection, where possible.
With the ec2 resource you can ask for every instance object in the aws account / region. To know if an instance is autoscaling, you can look for the 'aws:autoscaling:groupName' tag/key. The value of this key is the autoscaling group name.
This will simplify your calls to AWS API.
- get all instance objects: EC2.ServiceResource.instances
- iterate over objects collecting one from each autoscaling group (store in a list)
- iterate over list of instance objects needing a backup, access their volumes property and fire a snapshot
- ???
- profit
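The steps above can be sketched roughly as follows, assuming the boto3 resource interface; the selection logic is pulled into a plain function so it can be exercised without an AWS account:

```python
def pick_one_per_group(instances):
    """Keep one instance per autoscaling group, plus every non-ASG instance.

    Each element only needs a `tags` attribute shaped like the boto3
    EC2.Instance.tags list (or None when the instance has no tags).
    """
    chosen, seen_groups = [], set()
    for instance in instances:
        tags = {t["Key"]: t["Value"] for t in (instance.tags or [])}
        group = tags.get("aws:autoscaling:groupName")
        if group is None:
            chosen.append(instance)       # not autoscaling: always a candidate
        elif group not in seen_groups:
            seen_groups.add(group)        # first instance seen for this group
            chosen.append(instance)
    return chosen

# Live usage (untested sketch; the snapshot description is made up):
# import boto3
# ec2 = boto3.resource("ec2")
# for instance in pick_one_per_group(ec2.instances.all()):
#     for volume in instance.volumes.all():
#         volume.create_snapshot(Description="backup of " + instance.id)
```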
Can I exclude certain tags? Say I want to exclude instances which have a certain tag (which might, for example, indicate that they are to be scaled down), how would I do it?
I don't think this is currently possible using Boto3 (https://github.com/boto/boto3/issues/173). There might be an exclusion filter for Boto2. My suggestion is to fetch the instances and then do the filtering yourself in Python code if you really want to use Boto3.
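That client-side exclusion might look like this (a sketch; the tag key is a made-up example):

```python
def exclude_by_tag(instances, key, value):
    """Drop instance dicts that carry a given tag key/value pair.

    `instances` are the instance dicts from a describe_instances response.
    """
    kept = []
    for instance in instances:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if tags.get(key) != value:
            kept.append(instance)
    return kept
```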
Do you have incantations to return the value of a particular tag for a particular EC2 instance queried by name or instance-id? I promise I'm not asking you to answer my homework question. ;)
Right now I'm just using describe_instances with a Filter= on the tag:Name but that gives me the entire set of instance metadata when I really only care about one particular tag.
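As far as I know there is no call that returns a single tag value directly, so one approach is a small helper over the instance dicts that describe_instances already returns (the helper name is made up):

```python
def tag_value(instance, key):
    """Return the value of one tag from a describe_instances instance dict,
    or None if the instance does not carry that tag."""
    for tag in instance.get("Tags", []):
        if tag["Key"] == key:
            return tag["Value"]
    return None
```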
Hi, I want to filter for a particular instance and check whether that instance is running in AWS, using Python and boto3. Do you have any idea about this?
Boto3 has an Instance object which has a method called state: EC2.Instance.state
You can use normal Python to test if it is running.
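For example, a minimal check (the helper name is made up; EC2.Instance.state returns a dict with 'Code' and 'Name' keys):

```python
def is_running(state):
    """`state` is the dict returned by EC2.Instance.state."""
    return state.get("Name") == "running"

# Live usage (untested sketch; the instance ID is a placeholder):
# import boto3
# instance = boto3.resource("ec2").Instance("i-xxxxxxxxxxxxxxxxx")
# if is_running(instance.state):
#     print("instance is running")
```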
How can I tag an instance while creating it with boto3? I am doing it this way:
instances = ec2.create_instances(
    ImageId=image_dict[image_name],
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=[securitygroup_dict[security_group]],
    SubnetId=subnet_dict[subnet_name],
    InstanceType="t2.micro"
)
I want to add a tag to the instance.
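One approach, assuming a reasonably recent boto3: RunInstances accepts a TagSpecifications parameter, so the tag can be applied at creation time. The helper below just builds that structure (a sketch; the tag key/value are illustrative):

```python
def tag_spec(name_value):
    """Build the TagSpecifications argument for create_instances."""
    return [{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": name_value}],
    }]

# Usage with the snippet above (untested sketch):
# instances = ec2.create_instances(
#     ImageId=image_dict[image_name],
#     MinCount=1,
#     MaxCount=1,
#     InstanceType="t2.micro",
#     TagSpecifications=tag_spec("my-instance"),
# )
# An alternative after creation:
# instances[0].create_tags(Tags=[{"Key": "Name", "Value": "my-instance"}])
```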
Hi, as I see it, to get the instance metadata I need to use the filter function, but can it be used in a Lambda function? I am using:
ec2 = boto3.resource('ec2', region_name='us-west-2')
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type)
I am not getting any result in my Lambda function.
I want to list snapshots whose start time is less than some date. How do I express < (less than) in the filters of describe_snapshots?
ec2.describe_snapshots(
    Filters=[{'Name': 'start-time'}]
)
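As far as I know, the start-time filter only matches exact timestamps, so a less-than comparison has to happen client-side after fetching the snapshots; a sketch (the boto3 calls in the comments are untested here):

```python
def snapshots_before(snapshots, cutoff):
    """Keep snapshots whose StartTime is strictly before `cutoff`.

    `snapshots` is the "Snapshots" list from describe_snapshots; boto3
    deserializes StartTime as a timezone-aware datetime.
    """
    return [s for s in snapshots if s["StartTime"] < cutoff]

# Live usage (untested sketch):
# import boto3, datetime
# snaps = boto3.client("ec2").describe_snapshots(OwnerIds=["self"])["Snapshots"]
# old = snapshots_before(
#     snaps, datetime.datetime(2017, 1, 1, tzinfo=datetime.timezone.utc))
```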
Hi Russell,
I am using Lambda with a max of 512 MB of memory. Let's say I describe ECS services; is there a way I can filter the result? I don't want the entire description of the service; running count and pending count are probably all I need. Yes, I can write my own function to pare down the results, but I want to save compute time on Lambda. Any suggestions?
As far as I can tell, the boto3 client for ECS does not support the ability to trim down the response document.
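Trimming the response yourself after the call is cheap, though; a sketch of pulling out just the two counts (the field names follow the ECS describe_services response shape, and the helper name is made up):

```python
def service_counts(response):
    """Reduce a describe_services response to {name: (running, pending)}."""
    return {
        s["serviceName"]: (s["runningCount"], s["pendingCount"])
        for s in response["services"]
    }

# Live usage (untested sketch):
# import boto3
# ecs = boto3.client("ecs")
# counts = service_counts(ecs.describe_services(cluster="my-cluster",
#                                               services=["my-service"]))
```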
I think if I wanted to speed up execution I would have a separate service to query and then cache the results. I would then query the cache instead of working directly with real time data. I'm not sure if your problem can deal with slightly stale data.
My questions for you:
- How long is your AWS Lambda execution time right now?
- How certain are you that waiting for the ECS service descriptions is the slowest part of the current implementation?
- How often does your AWS Lambda run?
- If you could have instant ECS service descriptions, how much would you really save?
- Is this really the best cost-saving problem you could be working on?
I'm unsure why you want to speed up execution times. If you want to save on cost, I personally wouldn't bother. Engineering a robust solution to speed up execution time is likely going to cost more than what your Lambda bill will be.
If it's to speed up a long pipeline of dependent tasks, I think I would try engineering some sort of caching service instead of working directly with AWS API responses.
Hi Russell,
How do I filter particular fields from a response?
What if I want to print particular fields from a describe_images response?
response = client.describe_images(ImageIds=[ami])
So, with response being the dictionary that captures the result, what shall I do if I want to extract specific fields like ImageId and ImageLocation?
Thanks in advance
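Since the response is a plain dictionary, the fields can be pulled out with ordinary Python; a sketch (assuming the standard describe_images response shape, with the helper name made up):

```python
def image_summary(response):
    """Keep only ImageId and ImageLocation from a describe_images response."""
    return [
        {"ImageId": img["ImageId"], "ImageLocation": img["ImageLocation"]}
        for img in response["Images"]
    ]

# Live usage (untested sketch):
# import boto3
# client = boto3.client("ec2")
# for entry in image_summary(client.describe_images(ImageIds=[ami])):
#     print(entry["ImageId"], entry["ImageLocation"])
```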
I want to use two filters: the first will match instances whose name starts with "ram", and the second will check whether the instance is stopped. Is this right?
ec2_result = ec2.describe_instances(Filters=[
{'Name': 'tag:Name',
'Values': ['?ram*'],
'Name': 'instance-state-name',
'Values': [
'stopped',
'running'
]
},
])
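Not quite: the single dict above repeats the 'Name' and 'Values' keys, and a Python dict literal silently keeps only the last duplicate, so the tag filter never reaches AWS. A sketch of the difference (each condition needs its own dict, and the entries in Filters are ANDed by the API):

```python
# The dict literal from the question: 'Name' and 'Values' each appear twice,
# and Python keeps only the last occurrence of each duplicate key.
broken = {'Name': 'tag:Name', 'Values': ['?ram*'],
          'Name': 'instance-state-name', 'Values': ['stopped', 'running']}
# What the EC2 API would actually receive:
# {'Name': 'instance-state-name', 'Values': ['stopped', 'running']}

# One dict per condition instead; 'ram*' matches names starting with "ram"
# (the leading '?' would require one extra character before "ram"):
fixed = [
    {'Name': 'tag:Name', 'Values': ['ram*']},
    {'Name': 'instance-state-name', 'Values': ['stopped']},
]
# ec2_result = ec2.describe_instances(Filters=fixed)
```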