# AWS CLI crash course

***

### AWS CLI

If you are coming from GCP (`gcloud`), the AWS CLI (`aws`) will feel similar but slightly more "raw." In GCP, the tool tries to be helpful (for example, it manages SSH keys for you). In AWS, you generally have to be more explicit.

#### The "Grammar" of the CLI

The structure is very consistent but slightly different from Google's.

`aws [SERVICE] [ACTION] [FLAGS]`

* Service: The product (e.g., `s3`, `ec2`, `iam`, `lambda`).
* Action: What you want to do (e.g., `ls`, `describe-instances`, `create-user`).
* Flags: Options such as `--region` or `--profile`.

Example:

* *English:* "List all my S3 buckets."
* *CLI:* `aws s3 ls`

#### Getting Started (The Setup)

To authenticate, you usually generate an Access Key ID and Secret Access Key in the IAM console first. Then run:

* `aws configure`

  This wizard will ask for four things:

  1. AWS Access Key ID: (e.g., `AKIAIOSFODNN7EXAMPLE`)
  2. AWS Secret Access Key: (e.g., `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`)
  3. Default region name: (e.g., `us-east-1` or `us-west-2`)
  4. Default output format: (Recommend: `json`)
* `aws sts get-caller-identity`

  This is the "Who am I?" command. It confirms you are logged in and shows which account/user you are acting as.
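Under the hood, `aws configure` simply writes two plain-text INI files in `~/.aws/` (the keys below are AWS's own documentation examples, and `default` is the profile name):

```ini
# ~/.aws/credentials  -- the secrets
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config  -- the non-secret settings
[default]
region = us-east-1
output = json
```

If you juggle multiple accounts, `aws configure --profile dev` adds a `[dev]` section to the credentials file (and `[profile dev]` to the config file), which you then select with `--profile dev` on any command.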

#### The "Must-Know" Commands

**A. Storage (S3)**

This is the easiest part of the AWS CLI because the commands mimic standard Linux commands (`ls`, `cp`, `mv`).

* List buckets:

  `aws s3 ls`
* List files in a bucket:

  `aws s3 ls s3://my-data-lake/`
* Copy a file (upload):

  `aws s3 cp ./local-file.csv s3://my-data-lake/raw/`
* Recursive copy (Folder upload/download):

  `aws s3 cp ./local-folder s3://my-data-lake/raw/ --recursive`
* Make a bucket:

  `aws s3 mb s3://new-bucket-name`
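One conceptual point behind those commands: S3 buckets are flat. The "folders" that `aws s3 ls s3://bucket/raw/` shows are just shared key prefixes, not real directories. A tiny Python sketch with made-up key names illustrates what the prefix listing does:

```python
# S3 stores objects under full keys like "raw/jan.csv"; there is no
# directory tree. Listing a "folder" is just filtering keys by prefix.
keys = [
    "raw/jan.csv",
    "raw/feb.csv",
    "processed/jan.parquet",
]

prefix = "raw/"
matches = [k for k in keys if k.startswith(prefix)]
print(matches)  # ['raw/jan.csv', 'raw/feb.csv']
```

This is also why deleting every object with a given prefix makes the "folder" disappear: there was never a folder object to begin with.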

**B. Compute (EC2)**

This is where AWS feels different from GCP.

* List instances:

  `aws ec2 describe-instances`

  (Warning: This outputs a HUGE block of JSON describing every detail of the server.)
* Start/Stop a server:

  `aws ec2 start-instances --instance-ids i-1234567890abcdef0`

  `aws ec2 stop-instances --instance-ids i-1234567890abcdef0`

**C. The "SSH" Difference (Crucial!)**

In GCP, you run `gcloud compute ssh [NAME]`, and it magically handles keys.

In AWS, you cannot do that by default. You must do it the "Linux" way:

1. When you create the EC2 instance, you download a `.pem` file (your key).
2. You must protect that key: `chmod 400 my-key.pem`
3. You connect using standard SSH:

   `ssh -i my-key.pem ec2-user@[PUBLIC-IP-ADDRESS]`

*(Note: There are newer options such as EC2 Instance Connect, but plain SSH with a key file is still the most common approach.)*
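If you connect to the same instance often, you can move those flags into `~/.ssh/config` so a short alias does the work (the alias `my-ec2`, the IP, and the key path below are all placeholders):

```
Host my-ec2
    HostName 203.0.113.10
    User ec2-user
    IdentityFile ~/keys/my-key.pem
```

After that, plain `ssh my-ec2` connects with the right user and key, no flags needed.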

#### Advanced: Filtering the Noise

Because AWS outputs so much JSON data (especially for EC2), you need to know how to filter it using `--query`. This uses a syntax called JMESPath.

* Example: "Just give me the Instance ID and IP address of my servers, not the whole book."

  ```bash
  aws ec2 describe-instances \
    --query "Reservations[*].Instances[*].{ID:InstanceId, IP:PublicIpAddress}"
  ```
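To see what `--query` is actually doing, here is a plain-Python sketch of the same projection over a trimmed, made-up `describe-instances` response (instance IDs and IPs are placeholders). One subtlety: the `[*]...[*]` form keeps results nested per reservation, while the comprehension below flattens them, like the `Reservations[].Instances[]` form:

```python
import json

# Trimmed, made-up shape of an `aws ec2 describe-instances` response.
response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-1234567890abcdef0",
                        "PublicIpAddress": "203.0.113.10"}]},
        {"Instances": [{"InstanceId": "i-0fedcba9876543210",
                        "PublicIpAddress": "203.0.113.11"}]},
    ]
}

# Roughly what --query "Reservations[].Instances[].{ID:InstanceId, IP:PublicIpAddress}"
# computes: walk every reservation, then every instance, and keep two fields.
summary = [
    {"ID": inst["InstanceId"], "IP": inst["PublicIpAddress"]}
    for reservation in response["Reservations"]
    for inst in reservation["Instances"]
]

print(json.dumps(summary, indent=2))
```

The same idea works for any AWS response: find the path to the list you care about, then project out the handful of fields you actually need.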

#### Summary Table: GCP vs. AWS Commands

| **Action**     | **GCP Command (gcloud)**        | **AWS Command (aws)**         |
| -------------- | ------------------------------- | ----------------------------- |
| Auth           | `gcloud auth login`             | `aws configure`               |
| Check Identity | `gcloud config list`            | `aws sts get-caller-identity` |
| List Buckets   | `gcloud storage ls`             | `aws s3 ls`                   |
| Copy File      | `gcloud storage cp [src] [dst]` | `aws s3 cp [src] [dst]`       |
| List VM Info   | `gcloud compute instances list` | `aws ec2 describe-instances`  |
| SSH            | `gcloud compute ssh [name]`     | `ssh -i key.pem user@ip`      |

***
