# Terraform

***

[Terraform Registry](https://registry.terraform.io/) - where you can find providers and modules

Terraform data sources: <https://developer.hashicorp.com/terraform/language/data-sources>

CIDR calculator: <https://cidr.xyz/>

Terraform Sandbox environment for practice: <https://developer.hashicorp.com/terraform/sandbox>

[Terragrunt](https://terragrunt.gruntwork.io/) - IaC Orchestrator

Advanced techniques: <https://www.hashicorp.com/en/resources/advanced-terraform-techniques>

***

### **How to install Terraform**

**Install the CLI:** [**https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli**](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)

Install tab autocomplete for Terraform commands:

```bash
terraform -install-autocomplete
```

#### Terraform commands

Explore Terraform commands by running:

```bash
terraform -help
```

<details>

<summary>Output of <code>terraform -help</code></summary>

```bash
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  force-unlock  Release a stuck lock on the current workspace
  get           Install or upgrade remote Terraform modules
  graph         Generate a Graphviz graph of the steps in an operation
  import        Associate existing infrastructure with a Terraform resource
  login         Obtain and save credentials for a remote host
  logout        Remove locally-stored credentials for a remote host
  metadata      Metadata related commands
  modules       Show all declared modules in a working directory
  output        Show output values from your root module
  providers     Show the providers required for this configuration
  refresh       Update the state to match remote systems
  show          Show the current state or a saved plan
  state         Advanced state management
  taint         Mark a resource instance as not fully functional
  test          Execute integration tests for Terraform modules
  untaint       Remove the 'tainted' state from a resource instance
  version       Show the current Terraform version
  workspace     Workspace management

Global options (use these before the subcommand, if any):
  -chdir=DIR    Switch to a different working directory before executing the
                given subcommand.
  -help         Show this help output, or the help for a specified subcommand.
  -version      An alias for the "version" subcommand.
```

</details>

You can also explore a particular command, e.g. by running `terraform plan -help`.

***

### **Terraform Basics**

Terraform is an open-source infrastructure as code (IaC) tool created by HashiCorp. It allows you to define and provision infrastructure using a declarative configuration language.

**Infrastructure as Code**: Instead of manually setting up servers, networks, and other infrastructure through cloud provider consoles, you write configuration files that describe what you want. Terraform then creates and manages those resources for you.

**Provider Support**: Terraform works with numerous cloud providers (AWS, Azure, Google Cloud, etc.) and services through plugins called providers. This means you can manage resources across different platforms using the same tool and configuration language. You can also spin up Docker or Kubernetes infrastructure.

**Why People Use It**

* **Reproducibility**: Create identical environments easily
* **Version Control**: Track infrastructure changes like code
* **Collaboration**: Teams can work together on infrastructure
* **Automation**: Integrate with CI/CD pipelines
* **Multi-cloud**: Manage resources across different providers

#### Configuration Files

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2F8GM3gbUShlvtFRwGN7ei%2FBildschirmfoto%202025-07-05%20um%2016.06.28.png?alt=media&#x26;token=f445aa91-d8d1-4cb5-8f30-bcf122517fb4" alt=""><figcaption></figcaption></figure>

Configuration files are written in a domain-specific language called *HCL (Hashicorp Configuration Language)*.

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FrGXWDxozt8QcPOgw3PP6%2FBildschirmfoto%202025-07-05%20um%2016.09.20.png?alt=media&#x26;token=02266786-5596-4bdc-9cd4-f862b03b3cdf" alt=""><figcaption></figcaption></figure>

Terraform is not exclusive to AWS; it supports many cloud vendors. Example of using GCP as a provider:

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FgPG3q6mNMvB5RbJJI7P7%2FBildschirmfoto%202025-07-05%20um%2016.10.11.png?alt=media&#x26;token=95c0b3e8-4221-4765-9689-40320a0b8f6f" alt=""><figcaption></figcaption></figure>

***

### How Terraform Works (most workflows)

To create resources in Terraform there is a consistent workflow that you’ll follow.

The basic Terraform workflow is:

1. **Write** configuration files (`*.tf`)
2. Run **`terraform init`**\
   Downloads required providers and initializes the working directory.
3. Run **`terraform plan`**\
   Shows the execution plan: resources to create, update, or destroy.
4. Run **`terraform apply`**\
   Approves the execution plan, executes it, and reconciles real infrastructure with your configuration.

Terraform compares the **desired state** (your config files) with the **current state** (state file) and performs only necessary actions.
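The full loop can be sketched with a minimal configuration; the region and AMI ID below are illustrative placeholders, not values from this guide:

```hcl
# main.tf — minimal sketch; region and AMI ID are placeholders
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0fb653ca2d3203ac1" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running `terraform init`, then `terraform plan`, then `terraform apply` in the folder containing this file would provision the instance.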

<details>

<summary>Purpose of <code>terraform.tfstate.backup</code> file</summary>

The primary purpose of the `terraform.tfstate.backup` file is to serve as an immediate "undo" button for your state file. If your main state file becomes corrupted or is lost during an operation, this file allows you to recover the state so Terraform can continue managing your resources.

**How the Backup File is Created**

Whenever you run a command that writes to the state (like `terraform apply` or `terraform refresh`), Terraform performs the following operations in order:

1. It reads the current `terraform.tfstate`.
2. It writes that **existing** data to `terraform.tfstate.backup`.
3. It performs the operations and writes the **new** data to `terraform.tfstate`.

This ensures that `terraform.tfstate.backup` always contains the state exactly as it was *before* your most recent command run.

**When to Use It (Recovery Scenarios)**

You would typically use this backup file in the following situations:

* **Corruption:** The `terraform apply` process crashed or was interrupted mid-write, leaving the `terraform.tfstate` file empty or corrupted (invalid JSON).
* **Accidental Deletion:** You accidentally deleted the `terraform.tfstate` file locally.
* **Mistaken State Changes:** You ran a `terraform state rm` or `terraform import` command that messed up your state mapping, and you want to revert to the mapping you had 5 minutes ago.

**How to Recover**

To "activate" the backup, you simply rename the files:

1. Move/Rename the corrupted `terraform.tfstate` (e.g., to `terraform.tfstate.corrupted`).
2. Rename `terraform.tfstate.backup` to `terraform.tfstate`.
3. Run `terraform plan`. Terraform will now see the old state and compare it to your real-world infrastructure.

**Important Caveat: "The Gap"**

Because the backup file represents the state *before* the last run, it does not know about resources that were created *during* the last run (the one that failed or corrupted the file).

Example:

1. You have a backup.
2. You run `terraform apply` to create an AWS S3 Bucket.
3. The S3 Bucket is created in AWS.
4. Terraform crashes before writing the new state file.

If you restore the backup, Terraform **does not know that** S3 Bucket exists. If you run `terraform apply` again, it might try to create the bucket again and fail (because the name is taken), or create a duplicate resource. You would likely need to use `terraform import` to bring that "orphaned" resource into your restored state.

**Best Practice Note**

While `terraform.tfstate.backup` is useful for local development, it is not a robust disaster recovery strategy for production.

* **Production Standard:** Use a **Remote Backend** (like AWS S3 with DynamoDB, or Terraform Cloud). These support **State Versioning**, allowing you to download *any* historical version of your state file, not just the most recent previous one.

</details>


### Terraform Settings Block

In a configuration file, you typically specify the Terraform version and required provider versions:

```hcl
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

The **provider** block configures how Terraform authenticates and interacts with the AWS API:

```hcl
provider "aws" {
  region = "us-east-1"
}
```

***

### Input Variables and Outputs

#### Input Variables

Variables allow you to avoid hardcoded values and reuse configurations.

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FplyweulkgCHwj3IBU0ZM%2FBildschirmfoto%202025-07-05%20um%2020.24.42.png?alt=media&#x26;token=9a0ae3fe-0925-4379-89b1-95ad95adb738" alt=""><figcaption></figcaption></figure>

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FMFUiEtxj66y7waUuJzJP%2FBildschirmfoto%202025-07-05%20um%2020.29.14.png?alt=media&#x26;token=b4c54bd3-dd3a-4bc5-84d1-dad20f8a1185" alt=""><figcaption></figcaption></figure>

And then use them in your code:

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FOQsm9eF4x0ajlWyBZTIt%2FBildschirmfoto%202025-07-05%20um%2020.30.11.png?alt=media&#x26;token=d758f223-a79f-4452-b1ae-9817450f2851" alt=""><figcaption></figcaption></figure>

You may define variables in the same configuration file or create a separate file.

**Declaring variables in a separate file (e.g. `variables.tf`)**

```hcl
variable "instance_type" {
  type        = string
  default     = "t2.micro"
  description = "EC2 instance type"
}
```

If no default value is specified in the configuration file, Terraform will prompt you for one when running `terraform plan` or `terraform apply`. Alternatively, you can pass the value on the command line with `-var`:

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FEDQnyZsXWZktAV024WPx%2FBildschirmfoto%202025-07-05%20um%2020.31.07.png?alt=media&#x26;token=5d606b6d-793d-42d1-a8f8-d2c52a9f1369" alt=""><figcaption></figcaption></figure>

or more conveniently, create a **`.tfvars`** file:

<figure><img src="https://2332658533-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FG5fhKjYnbaQlTPTcaO85%2Fuploads%2FQRnjoNeRvHJCJtgX8IXP%2FBildschirmfoto%202025-07-05%20um%2020.34.06.png?alt=media&#x26;token=92ca3f8f-ef18-4963-83fd-be60265d5e12" alt=""><figcaption></figcaption></figure>

Run Terraform normally; `terraform.tfvars` (and any `*.auto.tfvars`) files are loaded automatically, while other `.tfvars` files must be passed explicitly with `-var-file`.
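For example, a `terraform.tfvars` file assigning the `instance_type` variable declared above might look like this (the `server_name` variable is a hypothetical second variable, added for illustration):

```hcl
# terraform.tfvars — values here override the variable defaults
instance_type = "t3.small"
server_name   = "web-prod" # hypothetical variable, for illustration
```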

During `terraform apply`, Terraform substitutes these variable values into the configuration.

Note about adding quotation marks around variable values in the `terraform apply` command:

* Most variable values passed via CLI flags **do not need quotation marks** when they are simple, for example: `terraform apply -var name=prod -var count=3 -var enabled=true`. This covers:
  * simple strings without spaces
  * numbers
  * booleans
  * identifiers
* You need quotes **if the value contains spaces, special characters, or shell-interpreted symbols.**
  * Values with spaces or other special characters
    * `terraform apply -var "description=Production environment"`
  * JSON values must be quoted
    * `terraform apply -var 'config={"region":"us-east-1","size":"large"}'`

#### Outputs

You can define outputs too.

Every resource you create exposes a set of attributes. For example, the documentation for the EC2 instance lists all of its attributes, such as the instance ID, its ARN (Amazon Resource Name), and its public IP address. In some cases you may want to export these attributes: to print them on the command line, to use them in other parts of your infrastructure, or to reference them from other Terraform workspaces. To do this, you declare them as output values.

Outputs allow you to expose values from your resources:

```hcl
output "server_id" {
  value = aws_instance.web.id
}

output "server_arn" {
  value = aws_instance.web.arn
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```

Retrieve them:

```bash
terraform output server_id
```

***

### Data Types

<https://developer.hashicorp.com/terraform/language/expressions/types>
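The common type constructors can be sketched with variable declarations like these (the variable names and defaults are illustrative):

```hcl
variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

variable "common_tags" {
  type = map(string)
  default = {
    Environment = "dev"
    Team        = "platform"
  }
}

variable "server" {
  type = object({
    instance_type = string
    monitoring    = bool
    disk_gb       = number
  })
  default = {
    instance_type = "t2.micro"
    monitoring    = false
    disk_gb       = 20
  }
}
```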

***

### Structuring your project

A clean Terraform project is usually organized in separate files:

```
terraform/
  main.tf
  variables.tf
  outputs.tf
  providers.tf
  network.tf
```

Terraform automatically loads all `*.tf` files in the directory and treats them as a **single configuration**, as if they had been written in one file. Splitting files improves readability and maintainability but does not change functionality.

***

### Data Sources

You can use `data` blocks in your configuration file to reference resources created outside of Terraform or in another Terraform workspace, i.e. to **query** existing infrastructure instead of creating it.

Terraform refers to these resources as data sources. You can check the provider documentation in the [Terraform registry](https://registry.terraform.io/) to see how to declare these resources.

You'll notice that for each resource available in the provider, you can declare it in Terraform either as a resource or as a data source depending on whether you want to create that resource or read from an external resource.

Unlike regular Terraform resources that create and manage infrastructure, data sources only read information.

They're defined using the `data` block:

**Example: Fetching the latest Ubuntu AMI**

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```

Use the value:

```hcl
resource "aws_instance" "web" {
  ami = data.aws_ami.ubuntu.id
}
```

**Another example:**

```hcl
# data source
data "aws_subnet" "selected_subnet" {
  id = "subnet-0a4518da5927f157e"
}

# resources
resource "aws_instance" "webserver" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"
  subnet_id     = data.aws_subnet.selected_subnet.id
  
  tags = {
    Name = var.server_name
  }
}
```

**Common Use Cases**

**Reference Existing Infrastructure**: Query resources created outside Terraform, like VPCs created manually or by other teams. You can then use these in your configuration without importing them into your state.

**Dynamic Configuration**: Automatically discover the latest AMI IDs, availability zones, or IP ranges instead of hardcoding values. This keeps your configuration flexible and up-to-date.

**Cross-Stack References**: Access outputs from other Terraform workspaces or state files using the `terraform_remote_state` data source, enabling modular infrastructure design.

**Validation and Dependencies**: Ensure required resources exist before creating dependent resources. For example, verifying a security group exists before launching instances.

**Multi-Region Deployments**: Discover region-specific information like available instance types or services without maintaining separate configurations.
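As a sketch of the "dynamic configuration" use case, the `aws_availability_zones` data source can discover zones in the current region instead of hardcoding them (the VPC ID and CIDR block below are placeholders):

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

# illustrative subnet placed in the first discovered zone
resource "aws_subnet" "example" {
  vpc_id            = "vpc-0123456789abcdef0" # placeholder VPC ID
  cidr_block        = "10.0.1.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]
}
```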

***

### Modules

[Docs](https://developer.hashicorp.com/terraform/language/modules/configuration)

A module is a reusable, isolated collection of Terraform files. Every Terraform project implicitly has a **root module**; any folder with `.tf` files can be used as a module. They are used to create reusable components, improve organization, and treat pieces of infrastructure as a cohesive unit.

**Module structure:**

```
modules/
  vpc/
    main.tf
    variables.tf
    outputs.tf
```

Modules represent the *real power* of Terraform.

**Creating a Terraform Module**

**Problem**

You need to create a reusable Terraform module for a specific set of resources to be used in various environments or projects.

**Solution**

Here’s an example of how to create a basic Terraform module for provisioning an AWS EC2 instance:

```hcl
# File: main.tf

# Define the EC2 instance resource
resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = var.instance_name
  }
}

# File: variables.tf

# Define input variables for the module
variable "ami" {
  description = "The AMI to use for the EC2 instance"
  type        = string
}

variable "instance_type" {
  description = "The type of EC2 instance to launch"
  type        = string
  default     = "t2.micro"
}

variable "instance_name" {
  description = "The Name tag for the EC2 instance"
  type        = string
}

# File: outputs.tf

# Define outputs from the module
output "instance_id" {
  description = "The ID of the instance"
  value       = aws_instance.example.id
}

output "instance_public_ip" {
  description = "The public IP address of the instance"
  value       = aws_instance.example.public_ip
}

# File: root module (using the ec2_instance module)

module "ec2_instance" {
  source        = "./my_module"
  ami           = "ami-abc123"
  instance_type = "t2.micro"
  instance_name = "my-instance"
}

output "instance_id" {
  value = module.ec2_instance.instance_id
}

output "instance_public_ip" {
  value = module.ec2_instance.instance_public_ip
}
```

**Key points to remember when creating modules:**

* Modules should be focused on a specific task or group of related resources.
* Use variables to make your module flexible and reusable across different environments.
* Provide useful outputs that allow users of your module to access important information about the created resources.
* Document your module by including a `README.md` file that explains what the module does, its inputs, outputs, and any other relevant information.
* Consider versioning your modules, especially if they’re shared across teams or projects.
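A version-pinned module call can be sketched like this, using a module from the public registry (the module name and inputs shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws" # public registry module
  version = "~> 5.0"                        # pin to a major version

  name = "example-vpc"
  cidr = "10.0.0.0/16"
}
```

Pinning `version` ensures that everyone running the configuration gets the same module code, even as new releases are published.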

For testing modules, you can create a test directory in your module with example configurations that use the module. This allows you to verify that the module works as expected and provides examples for users of your module:

```
my_module/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
└── tests/
    └── example_usage.tf
```

By creating well-structured, reusable modules, you can significantly improve the maintainability and consistency of your Terraform configurations across different projects and environments.

***


### Idempotency of `terraform init` and Terraform's lock file

The `terraform` binary contains the basic functionality for Terraform, but it does not come with the code for any of the providers (e.g., the AWS, Azure, or GCP providers). So when you're first starting to use Terraform, you need to run `terraform init` to tell Terraform to scan the code, figure out which providers you're using, and download the code for them. By default, the provider code is downloaded into a `.terraform` folder, which is Terraform's scratch directory (you may want to add it to `.gitignore`). Terraform also records information about the downloaded provider code in a `.terraform.lock.hcl` file.

Just be aware that you need to run `init` any time you start with new Terraform code, and that it's safe to run `init` multiple times (the command is idempotent).

**Provider installation and `.terraform` folder**

`terraform init`:

* downloads providers
* sets up backends
* creates `.terraform.lock.hcl` to lock exact provider versions
* creates `.terraform/` as a scratch directory (ignored in Git)

#### How to create and use `.gitignore` files with Terraform

<https://spacelift.io/blog/terraform-gitignore>

***

### Running User Data on EC2 Instances

Terraform allows you to pass initialization scripts:

```hcl
resource "aws_instance" "example" {
  ami                    = "ami-0fb653ca2d3203ac1"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.instance.id]

  user_data = <<-EOF
    #!/bin/bash
    echo "Hello, World" > index.html
    nohup busybox httpd -f -p 8080 &
  EOF

  user_data_replace_on_change = true

  tags = {
    Name = "terraform-example"
  }
}
```

Notes:

* `<<-EOF` is **heredoc syntax** for multiline strings.
* `user_data_replace_on_change = true` ensures instance replacement so updated user data runs.

Security reminder: ports below 1024 require root privileges; use a port above 1024 (e.g., `8080`) for processes run as a non-root user.

You need to do one more thing before this web server works. By default, AWS does not allow any incoming or outgoing traffic from an EC2 Instance. To allow the EC2 Instance to receive traffic on port 8080, you need to create a security group:

```hcl
resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

***

### Committing Terraform Code

Best practice `.gitignore`:

```
.terraform/
*.tfstate
*.tfstate.backup
```

Do **commit**:

* all `.tf` files
* `.terraform.lock.hcl`
* `.gitignore`

Do **not** commit:

* `.terraform/`
* state files

***

### **Terraform state**

[Docs](https://developer.hashicorp.com/terraform/language/state)

Use the [`terraform state` commands](https://developer.hashicorp.com/terraform/cli/commands/state) to modify state, since editing the state file directly is heavily discouraged.

**What is Terraform state?**

Every time you run Terraform, it records information about what infrastructure it created in a **Terraform state file**. By default, when you run Terraform in the folder `/foo/bar`, Terraform creates the file **`/foo/bar/terraform.tfstate`**. This file contains a custom JSON format that records a mapping from the Terraform resources in your configuration files to the representation of those resources in the real world.

#### Understanding Terraform State

Terraform tracks infrastructure through a state file that maps your configuration to real-world resources. By default, this creates a `terraform.tfstate` file in JSON format in your working directory.

#### **Challenges with Team Collaboration**

When working in teams, local state files create three critical problems:

**1. Shared Storage**

Team members need access to the same state files to collaborate effectively. Local storage prevents this coordination.

**2. Locking**

Without proper locking mechanisms, concurrent Terraform operations can cause race conditions, leading to conflicts, data loss, and state file corruption.

**3. Isolation**

Different environments (development, staging, production) should use separate state files to prevent accidental changes to production infrastructure.

***

#### Solution: Remote Backends

Remote backends solve all three collaboration challenges by storing state files in centralized locations.

**Supported Backends**

* Amazon S3
* Azure Storage
* Google Cloud Storage
* HashiCorp Terraform Cloud/Enterprise

#### Benefits of Remote Backends

**Automation**: Terraform automatically loads and saves state from the remote backend during `plan` and `apply` operations, eliminating manual errors.

**Built-in Locking**: Most remote backends support native locking. When one user runs `apply`, others must wait. Use `-lock-timeout=<TIME>` to specify how long to wait for lock release (e.g., `-lock-timeout=10m`).

**Security**: Remote backends provide encryption in transit and at rest, plus access control through IAM policies or equivalent mechanisms, protecting sensitive data in state files.

#### Best Practices

**Never Edit State Files Manually**: The state file format is a private API for Terraform's internal use only. Use `terraform import` or `terraform state` commands if you need to manipulate state.

**Use S3 for AWS**: When working with AWS infrastructure, Amazon S3 is the recommended remote backend choice.
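A sketch of an S3 backend with DynamoDB-based locking; the bucket, key, and table names are placeholders you would replace with your own:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "prod/terraform.tfstate"    # path of the state object
    region         = "us-east-1"
    encrypt        = true                        # server-side encryption at rest
    dynamodb_table = "terraform-locks"           # placeholder table; enables state locking
  }
}
```

After adding or changing a backend block, run `terraform init` again so Terraform can migrate state to the new location.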

#### State File Isolation Strategies

**Workspaces**: Use Terraform workspaces for environment separation within the same configuration.

**File Layout**: Organize separate directories and state files for different environments or components.

**Remote State Data Source**: Use `terraform_remote_state` data source to reference outputs from other state files.
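The `terraform_remote_state` data source can be sketched like this, assuming a separate "network" stack whose state lives in S3 and which exports a `subnet_id` output (all names are illustrative):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket" # placeholder bucket name
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# reference an output exported by the network stack
resource "aws_instance" "app" {
  ami           = "ami-0fb653ca2d3203ac1" # placeholder AMI ID
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```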

***

### Managing sensitive information

<https://developer.hashicorp.com/terraform/language/manage-sensitive-data#hide-sensitive-variables-and-outputs>

***

### Authentication in Terraform

[Comprehensive Guide to Authenticating to AWS on the Command Line](https://www.gruntwork.io/blog/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line)

To allow Terraform to make changes in your AWS account, you must provide AWS credentials, for example via environment variables. These should belong to an IAM user or role with appropriate permissions.

#### **Setting AWS Credentials (macOS/Linux)**

```bash
export AWS_ACCESS_KEY_ID="<your access key id>"
export AWS_SECRET_ACCESS_KEY="<your secret access key>"
```

#### **Setting AWS Credentials (Windows CMD)**

```cmd
set AWS_ACCESS_KEY_ID=<your access key id>
set AWS_SECRET_ACCESS_KEY=<your secret access key>
```

> **Note:** These environment variables only apply to the current shell session. You must re-export them in any new terminal unless you persist them in your shell profile.

**Other AWS Authentication Options**

In addition to environment variables, Terraform supports the same authentication mechanisms as all AWS CLI and SDK tools. Therefore, it’ll also be able to use credentials in `$HOME/.aws/credentials`, which are automatically generated if you run `aws configure`, or IAM roles, which you can add to almost any resource in AWS.

***

### **Graph Visualization**

`terraform graph` outputs a DOT graph describing dependencies between resources.\
This can be rendered via tools like Graphviz locally or online.

***

### Validate and test your infrastructure

<https://developer.hashicorp.com/terraform/language/validate>

Useful commands:

* `terraform validate` :\
  checks for syntax errors.
* `terraform fmt -recursive` :\
  formats every `.tf` file in the project.
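Terraform 1.6+ also ships a native test framework, run via `terraform test`. A minimal sketch, assuming a root module that declares an `aws_instance.web` resource:

```hcl
# tests/instance.tftest.hcl
run "uses_expected_instance_type" {
  command = plan # evaluate the plan without creating real infrastructure

  assert {
    condition     = aws_instance.web.instance_type == "t2.micro"
    error_message = "Instance type should default to t2.micro"
  }
}
```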

***

### Terraform Cloud

<https://spacelift.io/blog/what-is-terraform-cloud>

***

### 🛡 Handling Secrets in Terraform

**❌ DO NOT put secrets in:**

* variables
* .tfvars
* hardcoded HCL
* outputs
* state files

Terraform state is **not encrypted** by default.

#### ✔️ Secrets Best Practices

**1. Use environment variables**

```bash
export TF_VAR_db_password="supersecret"
```
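Terraform maps `TF_VAR_<name>` to the input variable of the same name. Declare the matching variable with `sensitive = true` so its value is redacted from plan and apply output (the variable name follows the export above):

```hcl
variable "db_password" {
  type      = string
  sensitive = true # redacted in plan/apply output, but still stored in state
}
```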

**2. Use cloud-native secrets managers:**

* AWS Secrets Manager + data sources
* AWS SSM Parameter Store
* GCP Secret Manager
* Azure Key Vault
* HashiCorp Vault (most advanced)

Example with AWS SSM:

```hcl
data "aws_ssm_parameter" "db_password" {
  name            = "/prod/db/password"
  with_decryption = true
}
```
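The decrypted value can then be consumed by a resource. A sketch using a hypothetical database instance (all arguments shown are illustrative):

```hcl
resource "aws_db_instance" "prod" {
  identifier        = "prod-db" # illustrative name
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "admin"
  password          = data.aws_ssm_parameter.db_password.value # note: still ends up in state
}
```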

**3. Use Terraform Cloud’s secure variables (built-in encryption)**

<https://spacelift.io/blog/what-is-terraform-cloud>

**4. Use `sops` + `.tfvars.json` encrypted files**

(very popular in Kubernetes and GitOps setups)

***
