Comments
Thanks Russell! The AWS docs don't always have all the sample code you'd like to see, especially when it comes to filters. This bit of code helped me with the 'tag:key' filters which I was having trouble generating based on boto3 documentation. Nice Job :)
Russell,
As with the other poster, thank you for the examples; they helped a lot. However, I am looking for one other function... and I may have to split it up... but since these commands have to traverse the internet before producing output, they are very costly and I would like to run them only once.
That being said... my goal is to use describe_instances to display the configuration of multiple instances where I know the instance IDs of approximately 8 and only specific tags for the other 2-3... I was trying to do something like this:
Unfortunately it did not work as planned. I believe what is happening is that "and" logic is applied when using the two filters this way. What I am looking to accomplish is an "or" statement, so that I can get what I need without running such an expensive command more than once.
Do you know of such a way?
Thanks,
Scott
How many instances do you have in this AWS account and region?
To solve this, I think I would:
I also cannot seem to find anything in the docs which describe an "or" functionality.
In the docs, Collection methods are chainable, which seems to lend itself to an "and" behavior.
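For what it's worth, EC2 filter semantics AND separate filter entries together and OR only the values within a single entry, so an instance-id list and a tag filter can't be OR'd in one API call. A minimal sketch of the fetch-once-then-merge idea in plain Python, using a trimmed-down stand-in for the describe_instances response (the ids and tag names below are invented):

```python
def select_instances(reservations, wanted_ids, wanted_tags):
    # Keep an instance when its id is in wanted_ids OR one of its tags
    # matches wanted_tags -- the "or" the Filters parameter can't express.
    hits = []
    for res in reservations:
        for inst in res.get('Instances', []):
            tags = {t['Key']: t['Value'] for t in inst.get('Tags', [])}
            if inst['InstanceId'] in wanted_ids or any(
                    tags.get(k) == v for k, v in wanted_tags.items()):
                hits.append(inst)
    return hits

# With a live connection the reservations come from a single API call:
#   reservations = client.describe_instances()['Reservations']
sample = [{'Instances': [
    {'InstanceId': 'i-aaa', 'Tags': [{'Key': 'Role', 'Value': 'staging'}]},
    {'InstanceId': 'i-bbb', 'Tags': []},
    {'InstanceId': 'i-ccc', 'Tags': [{'Key': 'Role', 'Value': 'prod'}]},
]}]
matched = select_instances(sample, wanted_ids={'i-bbb'},
                           wanted_tags={'Role': 'staging'})
print([i['InstanceId'] for i in matched])  # ['i-aaa', 'i-bbb']
```

This keeps the expensive call down to one round trip and does the "or" locally.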
I suppose I would like to hear more about your goal you are attempting to solve and less about this specific problem.
I am trying to write a backup script... it is largely unnecessary for the customer to back up their 8 auto-scaling instances (although I keep telling them that there is no benefit to backing up these instances). I only need to back up one instance from each group, since they are all the same.
I can get one instance ID for each group from the describe_auto_scaling_groups() function. That is why I know the instance IDs of some instances.
However (and comparatively more importantly), their 2-3 staging instances, which may or may not be destroyed and recreated from time to time (causing their IDs to change), need to be identified via their tags. Once I have these instance IDs I will use them to get their attached EBS volumes (likely only the staging instances) and store them together in a dictionary. Then I plan to take that dictionary and make snapshots for every instance.
I ended up coming to the same conclusion you did, and created my own function to parse the output of the describe_instances() function.
Sometimes I have trouble translating my English to Japanese, and that may be why the customer doesn't understand that there is no need to back up the auto-scaling instances... but since we are still in the design phase, I'm sure the requirements will change.
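The volume-gathering step in that plan could be sketched like this; the response shape is a trimmed stand-in for describe_instances output, and the snapshot call is shown only as a comment because the ids here are invented:

```python
def volumes_by_instance(reservations):
    # Map each instance id to its attached EBS volume ids, taken from a
    # describe_instances-style response.
    mapping = {}
    for res in reservations:
        for inst in res.get('Instances', []):
            mapping[inst['InstanceId']] = [
                bdm['Ebs']['VolumeId']
                for bdm in inst.get('BlockDeviceMappings', [])
                if 'Ebs' in bdm
            ]
    return mapping

# Real flow (names are placeholders):
#   reservations = client.describe_instances(Filters=[...])['Reservations']
#   for instance_id, volume_ids in volumes_by_instance(reservations).items():
#       for vol in volume_ids:
#           client.create_snapshot(VolumeId=vol,
#                                  Description='backup of ' + instance_id)

sample = [{'Instances': [{'InstanceId': 'i-stage1',
                          'BlockDeviceMappings': [{'Ebs': {'VolumeId': 'vol-1'}},
                                                  {'Ebs': {'VolumeId': 'vol-2'}}]}]}]
print(volumes_by_instance(sample))  # {'i-stage1': ['vol-1', 'vol-2']}
```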
I'll share the code I've written so far.
I only began coding recently, so there may be inefficiencies, but feel free to have a look.
http://pastebin.com/HNGvZ8j7
I think you are on the right track.
Personally I prefer to use the Boto3 "resource" connection over the lower-level "client" connection, where possible.
With the ec2 resource you can ask for every instance object in the AWS account / region. To know whether an instance is auto-scaling, you can look for the 'aws:autoscaling:groupName' tag key. The value of this key is the auto-scaling group name.
This will simplify your calls to AWS API.
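A small sketch of that tag check; the tag list shape matches what boto3's instance.tags returns (it can be None when an instance is untagged), and the sample values are made up:

```python
def autoscaling_group_of(tags):
    # tags is a list of {'Key': ..., 'Value': ...} dicts, as returned by a
    # boto3 Instance object's .tags attribute; returns the ASG name or None.
    return {t['Key']: t['Value']
            for t in (tags or [])}.get('aws:autoscaling:groupName')

# With a live connection you would iterate the resource's instances:
#   ec2 = boto3.resource('ec2')
#   for instance in ec2.instances.all():
#       group = autoscaling_group_of(instance.tags)

sample_tags = [{'Key': 'Name', 'Value': 'web-1'},
               {'Key': 'aws:autoscaling:groupName', 'Value': 'web-asg'}]
print(autoscaling_group_of(sample_tags))  # web-asg
print(autoscaling_group_of(None))         # None
```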
I like your code there; it is much simpler than navigating through the dictionaries.
Integrate the new snapshots (AMIs) with their auto-scaling group's AMI
Delete old AMIs
...
And point 4 is: ... Although unless the customer has a valid reason for wanting to back up auto-scaling instances... I'm pretty sure I'll be convincing them to only back up their staging instances.
I think I'm going to re-write the code using the resource connection. It sounds much easier to read, write, and understand.
perfect, glad I could help!
How would you make the filter case-insensitive?
Personally, I would fix my tags to lowercase (except for the Name tag). I would fix / normalize the data.
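If normalizing the data isn't an option, a hedged client-side sketch: as far as I know the EC2 API matches filter values case-sensitively, so the case-insensitive comparison has to happen in Python. The tag shape below follows the describe_instances response; the sample values are invented:

```python
def matches_tag_ci(tags, key, value):
    # Case-insensitive tag comparison, done client-side.
    lowered = {t['Key'].lower(): t['Value'].lower() for t in (tags or [])}
    return lowered.get(key.lower()) == value.lower()

print(matches_tag_ci([{'Key': 'Env', 'Value': 'Staging'}],
                     'env', 'STAGING'))  # True
print(matches_tag_ci([{'Key': 'Env', 'Value': 'prod'}],
                     'Env', 'staging'))  # False
```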
Can I exclude certain tags? Say I want to exclude instances which have a certain tag (which might, for example, indicate that they are to be scaled down), how would I do it?
I don't think this is currently possible using Boto3 (https://github.com/boto/boto3/issues/173). There might be an exclusion filter in Boto2. My suggestion is to fetch the instances and then do the filtering yourself in Python code if you really want to use Boto3.
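A sketch of that fetch-then-exclude approach on describe_instances-style instance dicts (the tag name here is invented):

```python
def without_tag(instances, key, value=None):
    # Keep instances that do NOT carry the given tag (or, when value is
    # given, that exact tag=value pair) -- exclusion done client-side.
    kept = []
    for inst in instances:
        tags = {t['Key']: t['Value'] for t in inst.get('Tags', [])}
        if key in tags and (value is None or tags[key] == value):
            continue
        kept.append(inst)
    return kept

sample = [{'InstanceId': 'i-1', 'Tags': [{'Key': 'scale-down', 'Value': 'yes'}]},
          {'InstanceId': 'i-2', 'Tags': [{'Key': 'Name', 'Value': 'web'}]}]
print([i['InstanceId'] for i in without_tag(sample, 'scale-down')])  # ['i-2']
```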
Thank you so much! It took me forever to find this blog post, but now my code works!
You are very welcome!
Thank you! Unfortunately, as a newbie to coding I find the Boto3 documentation very poor and confusing. Once again, thank you for some clarity.
Mario
Do you have incantations to return the value of a particular tag for a particular EC2 instance queried by name or instance-id? I promise I'm not asking you to answer my homework question. ;)
Right now I'm just using describe_instances with a Filter= on the tag:Name but that gives me the entire set of instance metadata when I really only care about one particular tag.
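No incantation needed, I think: keep the tag:Name (or instance-id) filter server-side, then pluck the single tag you care about out of the response. The sample response below is a trimmed, invented stand-in for describe_instances output:

```python
def tag_of(response, key):
    # Return one tag's value from the first matching instance in a
    # describe_instances-style response, or None when absent.
    for res in response.get('Reservations', []):
        for inst in res.get('Instances', []):
            for t in inst.get('Tags', []):
                if t['Key'] == key:
                    return t['Value']
    return None

# Real flow: response = client.describe_instances(
#     Filters=[{'Name': 'tag:Name', 'Values': ['my-server']}])
sample = {'Reservations': [{'Instances': [
    {'InstanceId': 'i-1', 'Tags': [{'Key': 'Owner', 'Value': 'alice'}]}]}]}
print(tag_of(sample, 'Owner'))  # alice
```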
Hi, I want to filter for a particular instance and check whether that instance is running in AWS, using Python boto3. Do you have any ideas about this?
Boto3 has an Instance object which has an attribute called state: EC2.Instance.state returns a dict like {'Code': 16, 'Name': 'running'}.
You can use normal Python to test whether it is running.
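A sketch of that check; the state dict shape matches what EC2.Instance.state returns, and the instance id in the comment is a placeholder:

```python
def is_running(state):
    # state is the dict boto3 returns, e.g. {'Code': 16, 'Name': 'running'}
    return state.get('Name') == 'running'

# Live check (instance id is hypothetical):
#   inst = boto3.resource('ec2').Instance('i-0123456789abcdef0')
#   print(is_running(inst.state))

print(is_running({'Code': 16, 'Name': 'running'}))  # True
print(is_running({'Code': 80, 'Name': 'stopped'}))  # False
```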
How can I tag an instance while creating it with boto3? I am doing it this way:

instances = ec2.create_instances(
    ImageId=image_dict[image_name],
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=[securitygroup_dict[security_group]],
    SubnetId=subnet_dict[subnet_name],
    InstanceType="t2.micro"
)

I want to add a tag to the instance.
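As far as I can tell, create_instances accepts a TagSpecifications parameter, so tags can be applied at launch instead of in a second call. A hedged sketch (the image id and tag values are placeholders):

```python
def tag_spec(resource_type, **tags):
    # Build the TagSpecifications structure that create_instances accepts.
    return [{'ResourceType': resource_type,
             'Tags': [{'Key': k, 'Value': v} for k, v in tags.items()]}]

# Hypothetical launch call:
#   instances = ec2.create_instances(
#       ImageId='ami-12345678', MinCount=1, MaxCount=1, InstanceType='t2.micro',
#       TagSpecifications=tag_spec('instance', Name='web-1', Env='staging'))

print(tag_spec('instance', Name='web-1'))
# [{'ResourceType': 'instance', 'Tags': [{'Key': 'Name', 'Value': 'web-1'}]}]
```

Alternatively, the Instance objects returned by create_instances have a create_tags method you can call after launch.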
How can I get all details without filtering, like Group Name, Description etc., for a Security Group?
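If I understand the question, calling describe_security_groups with no Filters at all returns everything; each entry already carries GroupName and Description. A sketch over an invented sample response:

```python
def group_summaries(response):
    # describe_security_groups returns {'SecurityGroups': [...]}; each entry
    # carries GroupId, GroupName, Description, and the rule lists.
    return [(g['GroupId'], g['GroupName'], g['Description'])
            for g in response.get('SecurityGroups', [])]

# Real flow: response = client.describe_security_groups()  # no Filters
sample = {'SecurityGroups': [{'GroupId': 'sg-123', 'GroupName': 'web',
                              'Description': 'allow http'}]}
print(group_summaries(sample))  # [('sg-123', 'web', 'allow http')]
```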
Hi. As I see it, to get the instance metadata I need to use the filter function, but can it be used in a Lambda function? I am using:

ec2 = boto3.resource('ec2', region_name='us-west-2')
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type)

I am not getting any result in my Lambda function.
Thanks Russell! It helps so much!!
I want to list snapshots whose start time is less than 'somedate'. How do I specify < (less than) in the filters of describe_snapshots?

ec2.describe_snapshots(
    Filters=[{'Name': 'start-time'}]
)
You can't do it with a Boto3 filter, but you can simply request all snapshots and filter the description documents yourself using Python.
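A sketch of that client-side date comparison (the snapshot ids and dates are invented; boto3 returns StartTime as a timezone-aware datetime, so compare against an aware cutoff):

```python
from datetime import datetime, timezone

def snapshots_before(snapshots, cutoff):
    # Keep snapshots whose StartTime precedes the cutoff -- the "<" the
    # Filters parameter can't express.
    return [s for s in snapshots if s['StartTime'] < cutoff]

# Real flow: snapshots = client.describe_snapshots(OwnerIds=['self'])['Snapshots']
sample = [
    {'SnapshotId': 'snap-old', 'StartTime': datetime(2016, 1, 1, tzinfo=timezone.utc)},
    {'SnapshotId': 'snap-new', 'StartTime': datetime(2017, 6, 1, tzinfo=timezone.utc)},
]
cutoff = datetime(2017, 1, 1, tzinfo=timezone.utc)
print([s['SnapshotId'] for s in snapshots_before(sample, cutoff)])  # ['snap-old']
```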
Hi Russell,
I am using Lambda with a max of 512 MB of memory. Let's say when I describe ECS services, is there a way I can filter the result? I don't want the entire description of the service; probably running count and pending count is all I need. Yes, I can write my own function to cascade through the results, but I want to save compute time on Lambda. Any suggestions?
As far as I can tell the boto3 client for ECS does not support the ability to trim down the response document.
I think if I wanted to speed up execution I would have a separate service to query and then cache the results. I would then query the cache instead of working directly with real-time data. I'm not sure if your problem can deal with slightly stale data.
My questions for you:
How long is your AWS Lambda execution time right now?
How certain are you that waiting for the ECS service descriptions is the slowest part of the current implementation?
How often does your AWS Lambda run?
If you could have instant ECS service descriptions, how much would you really save?
Is this really the best cost-saving problem you could be working on?
I'm unsure why you want to speed up execution times. If you want to save on cost, I personally wouldn't bother. Engineering a robust solution to speed up execution time is likely going to cost more than what your Lambda bill will be.
If it's to speed up a long pipeline of dependent tasks, I think I would try engineering some sort of caching service instead of working directly with AWS API responses.
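The caching idea, sketched minimally; the TTL and the fetch callable are placeholders, and in real use fetch would wrap the ecs.describe_services call:

```python
import time

class TTLCache:
    # Serve a stored response until it goes stale, instead of hitting the
    # AWS API on every invocation.
    def __init__(self, ttl_seconds, fetch):
        self.ttl, self.fetch = ttl_seconds, fetch
        self.value, self.stamp = None, 0.0

    def get(self):
        if time.monotonic() - self.stamp > self.ttl:
            self.value = self.fetch()   # e.g. ecs.describe_services(...)
            self.stamp = time.monotonic()
        return self.value

calls = []
cache = TTLCache(60, lambda: calls.append(1) or {'runningCount': 3})
cache.get()
cache.get()
print(len(calls))  # 1  (the second read is served from the cache)
```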
Excellent!
Is it possible to filter on a tag name whose value might be 'webapp1' or 'webapp2' but not 'webapp3' (!= 'webapp3')? How can we add a "not" to the filter statement?
I would ask the API for webapp* and filter the result using Python.
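A sketch of that two-step approach: request 'webapp*' with a server-side tag filter, then drop 'webapp3' in Python, since as far as I know EC2 filters have no negation (the sample data is invented):

```python
def matching_instances(instances, key='Name'):
    # Server-side: Filters=[{'Name': 'tag:Name', 'Values': ['webapp*']}]
    # Client-side: drop the one value we want excluded.
    kept = []
    for inst in instances:
        tags = {t['Key']: t['Value'] for t in inst.get('Tags', [])}
        if tags.get(key, '').startswith('webapp') and tags.get(key) != 'webapp3':
            kept.append(inst)
    return kept

sample = [{'InstanceId': 'i-1', 'Tags': [{'Key': 'Name', 'Value': 'webapp1'}]},
          {'InstanceId': 'i-3', 'Tags': [{'Key': 'Name', 'Value': 'webapp3'}]}]
print([i['InstanceId'] for i in matching_instances(sample)])  # ['i-1']
```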
Hi Russell,
Is there any way we can get all the OpenShift clusters from AWS using boto3?
Or can you give me a link where, using the AWS CLI or Python, I can get all OpenShift clusters from AWS? My goal is to find each cluster's creation date and calculate cost based on that.
Hi Russell,
How do I filter particular fields from a response?
What if I want to print particular fields from a describe_images response?

response = client.describe_images(ImageIds=[ami])

So, response being a dictionary which captures the response, what if I want to extract specific fields like ImageId and ImageLocation? What shall I do?
Thanks in advance
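One hedged way: treat the describe_images response as a plain dict and project out only the keys you want (the sample response below is invented):

```python
def image_fields(response, fields=('ImageId', 'ImageLocation')):
    # describe_images returns {'Images': [{...}, ...]}; pull out just the
    # keys we care about from each image description.
    return [{f: img.get(f) for f in fields}
            for img in response.get('Images', [])]

# Real flow: response = client.describe_images(ImageIds=['ami-12345678'])
sample = {'Images': [{'ImageId': 'ami-12345678',
                      'ImageLocation': 'amazon/example-image',
                      'State': 'available'}]}
print(image_fields(sample))
# [{'ImageId': 'ami-12345678', 'ImageLocation': 'amazon/example-image'}]
```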
How can I filter all ec2 instances in all regions based on the ec2 tag?
I want to use two filters: the first will search for instances whose name starts with "ram", and then check whether they are stopped. Is this right?

ec2_result = ec2.describe_instances(Filters=[
    {'Name': 'tag:Name',
     'Values': ['?ram*'],
     'Name': 'instance-state-name',
     'Values': [
         'stopped',
         'running'
     ]
    },
])
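Not quite, I think. A hedged correction:

```python
# A duplicated 'Name' key inside one Python dict silently overwrites the
# first, so the tag:Name filter above never reaches AWS. Each condition
# needs its own dict: separate dicts AND together, while the values inside
# one dict OR together. Also, '?ram*' matches one arbitrary character
# before 'ram'; names that *start* with ram would be 'ram*', and matching
# only stopped instances means dropping 'running' from the Values list.
filters = [
    {'Name': 'tag:Name', 'Values': ['ram*']},
    {'Name': 'instance-state-name', 'Values': ['stopped']},
]
# ec2_result = ec2.describe_instances(Filters=filters)

print(filters[0]['Values'], filters[1]['Values'])  # ['ram*'] ['stopped']
```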