17 Sep 2018
One of the best free tools out there for VMware is RVTools by Rob de Veij at Robware.
If you haven’t used RVTools, it is a Windows app that connects to a specified vCenter and displays almost all the inventory information about the contents of that vCenter. The data can be reviewed in the app or, even better, exported to Excel. Anytime I am working on a VMware project, I use RVTools to gather information about a client’s environment: leading into a design, a point-in-time view before a cutover, and more. Of all the tools and scripts I use, RVTools is the one I use the most. Now that I’m working with AWS, GCP, and other cloud environments, I wish there were a similar tool or script that would capture and export practically everything about them.
If you work with VMware and haven’t used RVTools before, go try it!
One thing I recently discovered is that RVTools can be scripted. This is very useful when a client has multiple vCenters and you want to set names on the exported files. One client I’ve worked with does a daily RVTools run so they have a copy of what their VMware environment looked like every day. While there are many ways to track infrastructure changes, this is a very simple one that anyone can do.
Below is the PowerShell code I use whenever I need to connect to a vCenter and export the data.
#List of vCenters
$vCenters = "vcenter1.venting.cloud","vcenter2.venting.cloud","vcenter3.venting.cloud","vcenter4.venting.cloud"

#creds
$User = "venting.cloud\user"
$password = "secret!"

#where the Excel files will be saved
$outputDIR = "Z:\Projects\venting\"

foreach ($vCenter in $vCenters) {
    #Setting up the arguments to be used for RVTools. This sets the file name to the vCenter name plus a sortable date format (%M is minutes; %m is month).
    $outputFile = $vCenter + "-RVToolsExport-" + $(Get-Date -UFormat %m-%d-%Y-%H-%M-%S) + ".xlsx"

    #Combine the variables into one argument string
    $Arguments = "-u $User -p $password -s $vCenter -c ExportAll2xlsx -d $outputDIR -f $outputFile"

    #Start the export process by calling RVTools and passing in the arguments
    Start-Process -FilePath "C:\Program Files (x86)\Robware\RVTools\RVTools.exe" -ArgumentList $Arguments -NoNewWindow -Wait -PassThru
}
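To get a daily snapshot like the one mentioned above, the loop can be saved as a script and scheduled. A minimal sketch using the built-in schtasks command, assuming the script is saved at C:\Scripts\RVToolsExport.ps1 (a hypothetical path I made up for the example):

```powershell
# Register a daily 06:00 run of the export script (hypothetical script path)
schtasks /Create /SC DAILY /ST 06:00 /TN "RVTools Daily Export" `
  /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\RVToolsExport.ps1"
```

Anything that can launch PowerShell on a schedule works here; schtasks is just the lowest-friction option on Windows.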
RVTools has other CLI options as well, which are listed in its documentation.
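For example, individual tabs can be exported to CSV instead of the full Excel workbook. A sketch using the variables from the script above; the exact export command names may vary by RVTools version, so check the documentation:

```powershell
# Hypothetical example: export only the vInfo tab as CSV rather than the whole workbook
& "C:\Program Files (x86)\Robware\RVTools\RVTools.exe" -u $User -p $password -s $vCenter -c ExportvInfo2csv -d $outputDIR -f "$vCenter-vInfo.csv"
```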
31 Jul 2018
Terraform has a cool feature called profiles, which can be very useful depending on how you are using Terraform.
Profiles provide a way for resources in the same code base to use different provider configurations. One use case is AWS, where you may have one AWS account that is the primary account for resources, and another account for security. With multiple Terraform scripts it is possible to manage those accounts separately, but with profiles we can manage them with the same set of code.
AWS CLI Setup
The AWS CLI first has to be configured with the different profiles. One of the first steps after installing the AWS CLI is to run
aws configure
This command starts a wizard that asks for the AWS Access Key ID, the AWS Secret Access Key, the default region, and the default output format. Once it has been run, two text files named “config” and “credentials” are created, by default in $HOME/.aws/ on Linux and OS X, or %USERPROFILE%\.aws\ on Windows.
Let’s open these in your text editor of choice. Here is what my config and credentials files look like.
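A representative sketch of the two files with the named profiles used in this post; all keys, regions, and values are placeholders:

```ini
; ~/.aws/config
[default]
region = us-west-2
output = json

[profile cdl]
region = us-west-2
output = json

[profile cdlsecurity]
region = us-west-2
output = json

; ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLE1
aws_secret_access_key = EXAMPLESECRET1

[cdl]
aws_access_key_id = AKIAEXAMPLE2
aws_secret_access_key = EXAMPLESECRET2

[cdlsecurity]
aws_access_key_id = AKIAEXAMPLE3
aws_secret_access_key = EXAMPLESECRET3
```

Note the section-header difference: named profiles are written as [profile name] in config, but just [name] in credentials.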
The default section in each file is created by the aws configure command, while the named profiles were manually created. In our environment, the default and cdl profiles are the primary lab AWS account, and cdlsecurity is the lab security account.
Terraform Code
In the Terraform code, working with multiple providers is much the same as normal, except there will be multiple provider definitions, and some resources will carry an extra provider argument. Here is our sample provider definition:
provider "aws" {
  region = "us-west-2"
}

provider "aws" {
  region  = "us-west-2"
  alias   = "cdlsecurity" #The Terraform alias that is used by resources
  profile = "cdlsecurity" #The AWS CLI profile that is mapped
}
Our first provider uses the default AWS CLI profile for the Access Key and the Secret, while the second has a Terraform alias of cdlsecurity and is mapped to the cdlsecurity profile in the credentials file for its Access Key and Secret.
Here is what our resource definitions look like, with an S3 bucket in each account.
resource "aws_s3_bucket" "normalbucket" {
  bucket = "normalbucket"
  acl    = "private"
}

resource "aws_s3_bucket" "cdlbucket" {
  bucket   = "cdlbucket"
  acl      = "private"
  provider = "aws.cdlsecurity" #The non-default provider we want to use
}
The first bucket is created with the default AWS profile, while the second uses the aws.cdlsecurity alias. The “provider” entry tells Terraform that instead of using the default aws provider, it should use the other one that was defined in the provider definitions.
Run a terraform init, plan, and apply, and you will have two S3 buckets in two separate accounts.
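That workflow from an empty directory, assuming the provider and resource blocks above are saved together in a .tf file:

```shell
terraform init   # downloads the aws provider plugin
terraform plan   # should show two aws_s3_bucket resources to create
terraform apply  # creates one bucket in each account
```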
As always with almost anything in IT, there are a lot of ways to do things. Terraform has a number of ways to handle credentials with providers and to work with multiple accounts. This is just one way to do it that you may find handy in the future.