Error: creating EC2 Instance: operation error EC2: RunInstances, https response error StatusCode: 400, api error InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.pubsubnet.id' does not exist

Hi Techies!

Good day!

We are going to see how to troubleshoot issues in a Terraform script (an alternative to Ansible) that creates a VPC in AWS.

In my script the configuration file is named main.tf, and we need to execute the terraform init, terraform validate, terraform plan, and terraform apply commands one by one.

terraform init: Initializes the working directory that contains the Terraform configuration files (in our scenario, main.tf).

terraform validate: Checks the syntax and internal consistency of your Terraform configuration files without accessing remote services.

terraform plan: Creates an execution plan so you can preview the changes Terraform will make to your infrastructure.

terraform apply: Executes the actions proposed in a Terraform plan. A typical end-to-end run is sketched below.
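For quick reference, here is that workflow as a minimal shell sketch (the working directory /root/terraform is an assumption; the prompt later in this post only shows a directory named terraform):

#cd /root/terraform     # assumed path to the directory holding main.tf
#terraform init         # initialize the directory and download the AWS provider
#terraform validate     # check syntax without touching AWS
#terraform plan         # preview the resources to be created
#terraform apply        # create the VPC, subnets and EC2 instances

Only the last command changes real infrastructure; init, validate, and plan are safe to repeat.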

In our scenario, I'm getting an error while applying the configuration (terraform apply).

Error:

Error: creating EC2 Instance: operation error EC2: RunInstances, https response error StatusCode: 400, RequestID: 5eea2384-64e8-4a57-ba1a-2ac955c799f9, api error InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.pubsubnet.id' does not exist
│
│ with aws_instance.pub_instance,
│ on main.tf line 157, in resource "aws_instance" "pub_instance":
│ 157: resource "aws_instance" "pub_instance" {
│
╵
╷
│ Error: creating EC2 Instance: operation error EC2: RunInstances, https response error StatusCode: 400, RequestID: 64d5c412-e7db-49ad-a7aa-49b4f9345d89, api error InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.pvtsubnet.id' does not exist
│
│ with aws_instance.pvt_instance,
│ on main.tf line 172, in resource "aws_instance" "pvt_instance":
│ 172: resource "aws_instance" "pvt_instance" {

Code in my script:

 resource "aws_instance" "pub_instance" {
ami                                     = "ami-033fabdd332044f06"
instance_type                           = "t2.micro"
availability_zone                       = "us-east-2a"
associate_public_ip_address             = "true"
vpc_security_group_ids                  = [aws_security_group.PUBSG.id]
subnet_id                               = "aws_subnet.PUBSUB.id"
key_name                                = "Terraform_Srv"

  tags = {
  Name = "WEBSERVER"
 }

}

resource "aws_instance" "pvt_instance" {
ami                                     = "ami-033fabdd332044f06"
instance_type                           = "t2.micro"
availability_zone                       = "us-east-2b"
associate_public_ip_address             = "true"
vpc_security_group_ids                  = [aws_security_group.PVTSG.id]
subnet_id                               = "aws_subnet.PVTSUB.id"
key_name                                = "Terraform_Srv"

  tags = {
  Name = "APPSERVER"
 }

}

Solution:

In my script I had wrapped the subnet ID reference in double quotes (""). Quoting turns the reference into a plain string, so Terraform sent the literal text aws_subnet.PUBSUB.id to AWS as the subnet ID, which is exactly the value the error message complains about. Removing the double quotes so the attribute receives a real resource reference fixed the issue, as the contrast below shows.
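Here is a minimal side-by-side using the resource names from the script (this illustrates the general Terraform quoting rule, not code from the original post):

# Wrong: the quotes make this a literal string, so AWS receives
# the text "aws_subnet.PUBSUB.id" instead of a real subnet ID
subnet_id = "aws_subnet.PUBSUB.id"

# Right: the unquoted expression is evaluated by Terraform and
# resolves to the actual subnet ID at apply time
subnet_id = aws_subnet.PUBSUB.id

In Terraform 0.12 and later, the interpolation form "${aws_subnet.PUBSUB.id}" inside quotes would also resolve, but the bare expression is the recommended style.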

resource "aws_instance" "pub_instance" {
ami                                    = "ami-033fabdd332044f06"
instance_type                          = "t2.micro"
availability_zone                      = "us-east-2a"
associate_public_ip_address            = "true"
vpc_security_group_ids                 = [aws_security_group.PUBSG.id]
subnet_id                              = aws_subnet.PUBSUB.id
key_name                               = "Terraform_Srv"

tags = {
Name = "WEBSERVER"
}

}

resource "aws_instance" "pvt_instance" {
ami                                    = "ami-033fabdd332044f06"
instance_type                          = "t2.micro"
availability_zone                      = "us-east-2b"
associate_public_ip_address            = "true"
vpc_security_group_ids                 = [aws_security_group.PVTSG.id]
subnet_id                              = aws_subnet.PVTSUB.id
key_name                               = "Terraform_Srv"

tags = {
Name = "APPSERVER"
}

}

Now, while executing terraform apply, it created the EC2 instances along with the VPC successfully. Note that terraform validate and terraform plan did not catch this mistake, because a quoted string is perfectly valid syntax; the failure only surfaces when AWS rejects the literal value at apply time.

Result:

#terraform apply

aws_instance.pvt_instance: Creating...
aws_instance.pub_instance: Creating...
aws_route_table.PVTRT: Modifying... [id=rtb-05ab30d4598210e59]
aws_route_table.PVTRT: Modifications complete after 0s [id=rtb-05ab30d4598210e59]
aws_instance.pub_instance: Still creating... [10s elapsed]
aws_instance.pvt_instance: Still creating... [10s elapsed]
aws_instance.pvt_instance: Still creating... [20s elapsed]
aws_instance.pub_instance: Still creating... [20s elapsed]
aws_instance.pub_instance: Still creating... [30s elapsed]
aws_instance.pvt_instance: Still creating... [30s elapsed]
aws_instance.pvt_instance: Creation complete after 31s [id=i-096c28fbbaeff8a42]
aws_instance.pub_instance: Creation complete after 31s [id=i-0f745c3aeca6327aa]
Apply complete! Resources: 2 added, 1 changed, 0 destroyed.
[root@ip-172-31-7-226 terraform]#

 

Unreachable Host: port unreachable


I can SSH into the destination machine manually and it works, but whenever I run this playbook, I get this error output:

sudo ansible-playbook test.yml

PLAY [web] *************************************************************

TASK [Gathering Facts] *************************************************
fatal: [machine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).\r\n", "unreachable": true}
        to retry, use: --limit @/ansible-play/test.retry

PLAY RECAP *************************************************************
machine : ok=0 changed=0 unreachable=1 failed=0

Solution 1:

Check the SSH arguments first. I used the command below (an explicit remote user plus verbose output), and it sometimes resolves the issue or at least reveals the cause:

#ansible-playbook --user=brines -vvv test.yml
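If particular SSH arguments are needed on every run, they can also live in ansible.cfg instead of the command line; a minimal sketch (remote_user matches the command above, while the ssh_args values are common connection-reuse options and an assumption, not something from the original post):

[defaults]
remote_user = brines

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

With this in place, a plain ansible-playbook test.yml picks up the same user and SSH options.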

Solution 2:

An invalid SSH configuration may also lead to this issue. In that case, fix the SSH configuration, or copy the SSH public key to the hosts concerned:

#cd /root/.ssh 
#ssh-keygen -t rsa

Save the key under the default name id_rsa.

#cat id_rsa.pub

Copy the entire key and paste it into the authorized_keys file on the destination host (under ~/.ssh/ for the remote user, or /root/.ssh/ for root):

#vi authorized_keys
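Alternatively, ssh-copy-id automates the copy-and-paste step; a minimal sketch, assuming the remote user brines and the host machine from the playbook output above:

#ssh-keygen -t rsa
#ssh-copy-id -i /root/.ssh/id_rsa.pub brines@machine

ssh-copy-id appends the public key to the remote user's ~/.ssh/authorized_keys on the destination host and fixes the file permissions in one step.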

Then run this to check:

#ansible all -m ping -u brines

The output should look like this:

master-node | SUCCESS => { "changed": false, "ping": "pong" }
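If the user or key path differs per host, they can also be pinned in the Ansible inventory so that --user is not needed on every run; a minimal sketch (the group name web matches the playbook above, while the host alias and key path are assumptions):

[web]
machine ansible_user=brines ansible_ssh_private_key_file=/root/.ssh/id_rsa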