AWS Notes


List available images

CentOS 7 products:

https://aws.amazon.com/marketplace/fulfillment?productId=b7ee8a69-ee97-4a49-9e68-afaee216db2e&launch=oneClickLaunch

Accept the license and choose a region -> the AMI ID for that region is shown on the page.
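
The same AMI can be found from the CLI by filtering on the marketplace product code (the code below is the commonly cited one for the CentOS 7 listing; verify it against the product page):

aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=product-code,Values=aw0evgkw8e5c1q413zgy5pjce" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text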

Ubuntu:

# Owner ID 099720109477 is Canonical (Ubuntu).
aws ec2 describe-images \
  --filters "Name=state,Values=available" \
            "Name=owner-id,Values=099720109477" \
            "Name=virtualization-type,Values=paravirtual" \
            "Name=root-device-type,Values=instance-store,ebs" \
            "Name=architecture,Values=x86_64" \
            "Name=image-type,Values=machine" \
  --output text
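
To grab only the newest image instead of the whole list, sort on CreationDate and take the last element (a sketch assuming Canonical's hvm-ssd naming scheme):

aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text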

List Amazon Linux AMIs:

aws ec2 describe-images --filters "Name=name,Values=amzn*" --query "Images[*].[Name,Description]" --output text

And the corresponding Terraform data source:

data "aws_ami" "amzn" {
  most_recent      = true
  filter {
    name   = "name"
    values = ["amzn-ami-minimal-hvm-*-x86_64-ebs"]
  }
}
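
A minimal sketch of consuming it (the instance type is just an example):

resource "aws_instance" "example" {
  ami           = data.aws_ami.amzn.id # resolves to the newest matching AMI
  instance_type = "t3.micro"           # example only
}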

Blat Keys to All Regions

reference: https://alestic.com/2010/10/ec2-ssh-keys/

#!/bin/sh

keypair=keyname  # or some name that is meaningful to you
publickeyfile=/the/file/me.pub
regions=$(aws ec2 describe-regions \
  --output text \
  --query 'Regions[*].RegionName')

for region in $regions; do
  echo $region
  aws ec2 import-key-pair \
    --region "$region" \
    --key-name "$keypair" \
    --public-key-material "file://$publickeyfile"
done
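
To verify the key actually landed everywhere, a quick check loop reusing the same variables:

for region in $regions; do
  aws ec2 describe-key-pairs \
    --region "$region" \
    --key-names "$keypair" \
    --query 'KeyPairs[].[KeyName,KeyFingerprint]' \
    --output text
done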

Text Report

Be aware of your region: set it on the command line with --region, or via the AWS_DEFAULT_REGION environment variable.

List Buckets

aws s3api list-buckets --query "Buckets[].Name" --output text
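
To also see which region each bucket lives in (get-bucket-location prints None for us-east-1):

for b in $(aws s3api list-buckets --query 'Buckets[].Name' --output text)
do
  echo -n "$b "
  aws s3api get-bucket-location --bucket "$b" --query 'LocationConstraint' --output text
done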

List Subnets

aws ec2 describe-subnets --query 'Subnets[*].[SubnetId, AvailabilityZone, CidrBlock, AvailableIpAddressCount]' --output text

Instances by subnet

aws ec2 describe-subnets --query 'Subnets[*].[SubnetId, CidrBlock]' --output text | while read subnet cidr
do
  echo "Subnet ${subnet} ${cidr}"
  for i in $(aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text --filters "Name=subnet-id,Values=${subnet}")
  do
    name=$(aws ec2 describe-tags --query 'Tags[*].[Value]' --output text --filters "Name=resource-id,Values=${i}" "Name=key,Values=Name")
    echo -n "${name} "
    aws ec2 describe-instances --query 'Reservations[*].Instances[*].[State.Name, InstanceId, ImageId, PrivateIpAddress, PublicIpAddress, InstanceType]' --output text --filters "Name=instance-id,Values=${i}"
  done
done

Instances

for i in $(aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text)
do
  name=$(aws ec2 describe-tags --query 'Tags[*].[Value]' --output text --filters "Name=resource-id,Values=${i}" "Name=key,Values=Name")
  echo -n "${name} "
  aws ec2 describe-instances --query 'Reservations[*].Instances[*].[State.Name, InstanceId, ImageId, PrivateIpAddress, PublicIpAddress, InstanceType]' --output text --filters "Name=instance-id,Values=${i}"
done


Good reference for getting tags: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html

Getting tags example:

aws ec2 describe-tags --query 'Tags[*].[Value]' --filters "Name=resource-id,Values=<id>"

Just an instance's name:

aws ec2 describe-tags --query 'Tags[*].[Value]' --filters "Name=resource-id,Values=<id>" "Name=key,Values=Name" --output text

List all instances across all regions:

for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text)
do
  echo -e "\nListing instances in region: '$region'..."
  aws ec2 describe-instances \
    --query 'Reservations[*].Instances[*].[PrivateIpAddress, InstanceId, ImageId, PublicIpAddress, State.Name, InstanceType]' \
    --output text \
    --region "$region"
done

Pipe queries

From: https://opensourceconnections.com/blog/2015/07/27/advanced-aws-cli-jmespath-query/

aws ec2 describe-images --owner amazon --query 'Images[?Name!=`null`]|[?starts_with(Name, `aws-elasticbeanstalk`) == `true`]|[?contains(Name, `tomcat7`) == `true`]|[0:5].[ImageId,Name]' --output text

Count elements

E.g. instances in an autoscaling group: there is no "InstanceCount" attribute, but an array of the instances is provided, so you have to count the "Instances" yourself.

Use length(x):

aws autoscaling describe-auto-scaling-groups \
--query 'AutoScalingGroups[].[length(Instances),DesiredCapacity,MinSize,MaxSize,CreatedTime,AutoScalingGroupName,LaunchConfigurationName]| sort_by(@, &[0])' \
--output table

You can also count _all_:

length(@)
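
For example, to count every instance in the region in one shot:

aws ec2 describe-instances --query 'length(Reservations[].Instances[])'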

RDS Parameter group compare

export AWS_DEFAULT_REGION=ca-central-1
aws rds describe-db-parameter-groups
aws rds describe-db-parameters --db-parameter-group-name myparamgroup \
  --query 'Parameters[*].[ParameterName, ParameterValue, MinimumEngineVersion, Source]' \
  --output text | sort > ca.myparamgroup
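
Then repeat against the other region you want to compare (us-east-1 here is just an example) and diff the two dumps:

export AWS_DEFAULT_REGION=us-east-1
aws rds describe-db-parameters --db-parameter-group-name myparamgroup \
  --query 'Parameters[*].[ParameterName, ParameterValue, MinimumEngineVersion, Source]' \
  --output text | sort > use1.myparamgroup

diff ca.myparamgroup use1.myparamgroup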

Dump all zones

for i in $(aws route53 list-hosted-zones --query 'HostedZones[*].[Id]' --output text | cut -f3 -d/)
do
  echo -n "DOMAIN: "
  aws route53 get-hosted-zone --id $i --query 'HostedZone.[Name]' --output text
  echo -n "ZONEID: "
  echo $i
  aws route53 list-resource-record-sets --hosted-zone-id $i
done > zonereport.txt

Getting the name tag out

How do you extract the "Name" tag in your query? Use:

Tags[?Key==`Name`].Value | [0]

For example:

aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId, Tags[?Key==`Name`].Value | [0], State.Name]' --output text

Alerting on activity

http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html

  • cloudwatch-alarms-for-cloudtrail-signin
  • cloudwatch-alarms-for-cloudtrail-authorization-failures

CloudWatch log filters

http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html

{ ( $.userIdentity.arn = "arn:aws:iam::XXX:user/david.thornton@scalar.ca" ) && ( $.errorCode = "AccessDenied" ) && ( $.userAgent != "[aws-sdk-go/1.12.8 (go1.9; linux; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.10.0-dev]" )}
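
To turn a pattern like that into something you can alarm on, create a metric filter (a sketch; the log group, filter, and metric names here are assumptions):

aws logs put-metric-filter \
  --log-group-name CloudTrail/logs \
  --filter-name access-denied \
  --filter-pattern '{ ($.errorCode = "AccessDenied") }' \
  --metric-transformations metricName=AccessDeniedCount,metricNamespace=CloudTrail,metricValue=1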

Make CloudWatch Alarms with Terraform

Get a list of load balancers, with name and kube service-name tag:

for i in $(aws elb describe-load-balancers --query "LoadBalancerDescriptions[*].[LoadBalancerName]" --output text)
do
  aws elb describe-tags --load-balancer-names $i --query "TagDescriptions[*].[LoadBalancerName,Tags[?Key=='kubernetes.io/service-name'].Value | [0]]" --output text
done

Send that to a file and then run this on each line:

cat elb_template.temp | perl -e 'while (<STDIN>) { s/\@\@name\@\@/$ARGV[0]/g; s/\@\@lbname\@\@/$ARGV[1]/g; print }' col1 col2

Here col1 and col2 are the LoadBalancerName and the kubernetes.io/service-name tag value.

I used vi to make a script to spit out all the terraform code.

template: elb_template.temp

resource "aws_cloudwatch_metric_alarm" "@@name@@" {
  alarm_actions       = ["arn:aws:sns:region:account:XXX-alarm"]
  alarm_name          = "@@name@@"
  alarm_description   = "@@name@@ latency alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"

  dimensions = {
    LoadBalancerName = "@@lbname@@"
  }

  evaluation_periods                    = 3
  insufficient_data_actions             = []
  metric_name                           = "Latency"
  period                                = "120"
  statistic                             = "Average"
  threshold                             = "80"
  namespace                             = "AWS/ELB"
  statistic                             = "Average"
  threshold                             = 1
  treat_missing_data                    = "ignore"
}

You will want to hand-edit the results to get the "/" out of the Terraform identifier.

You must already have the SNS topic set up.
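
If not, a sketch of creating one (topic name matching the placeholder ARN in the template; the email endpoint is an example):

aws sns create-topic --name XXX-alarm
aws sns subscribe \
  --topic-arn arn:aws:sns:region:account:XXX-alarm \
  --protocol email \
  --notification-endpoint you@example.com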

List alarms:

aws cloudwatch describe-alarms --query "MetricAlarms[*].[AlarmName]" --output text


S3

Versioning + MFA delete

Enable:

aws s3api put-bucket-versioning --bucket mybucket --versioning-configuration "MFADelete=Enabled,Status=Enabled" --mfa "my-mfa-device-arn code-from-device"

Check:

aws s3api get-bucket-versioning --bucket mybucket

VPC

VPC Flow Logs

To Athena, for understanding traffic:

https://docs.aws.amazon.com/athena/latest/ug/vpc-flow-logs.html

CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
  version int,
  account string,
  interfaceid string,
  sourceaddress string,
  destinationaddress string,
  sourceport int,
  destinationport int,
  protocol int,
  numpackets int,
  numbytes bigint,
  starttime int,
  endtime int,
  action string,
  logstatus string
)  
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ' '
LOCATION 's3://your_log_bucket/prefix/AWSLogs/{subscribe_account_id}/vpcflowlogs/{region_code}/'
TBLPROPERTIES ("skip.header.line.count"="1");
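
Partitions are not discovered automatically; add one per day, then query. A sketch via the Athena CLI (the date and the result bucket are placeholders):

aws athena start-query-execution \
  --query-string "ALTER TABLE vpc_flow_logs ADD PARTITION (dt='2021-01-01') LOCATION 's3://your_log_bucket/prefix/AWSLogs/{subscribe_account_id}/vpcflowlogs/{region_code}/2021/01/01'" \
  --result-configuration "OutputLocation=s3://your-query-results-bucket/"

aws athena start-query-execution \
  --query-string "SELECT sourceaddress, destinationport, action, count(*) AS hits FROM vpc_flow_logs WHERE dt='2021-01-01' GROUP BY sourceaddress, destinationport, action ORDER BY hits DESC LIMIT 25" \
  --result-configuration "OutputLocation=s3://your-query-results-bucket/"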

CloudFormation

Cleaning up failed change sets:

for i in $(aws cloudformation list-change-sets --stack-name ${THE_STACK_NAME} | jq -r '.Summaries[] | select(.Status == "FAILED") | .ChangeSetName')
do
  echo $i
  aws cloudformation delete-change-set --change-set-name $i --stack-name ${THE_STACK_NAME}
done

(jq -r emits raw strings, so no sed quote-stripping hackery is needed.)

Also see