Git Tips and Tricks

!!! WORK IN PROGRESS !!!

Some useful commands

Set up Git to use VS Code as the default editor

Run this command and then add these lines:

$ git config --global -e
[core]
	editor = code --wait

[diff]
	tool = default-difftool

[difftool "default-difftool"]
	cmd = code --wait --diff $LOCAL $REMOTE

[difftool]
	prompt = false

[merge]
	tool = code

[merge "tool"]
	cmd = "code --wait $MERGED"
	prompt = false
	keepbackup = false
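
With this in place, VS Code should open whenever Git needs a diff or merge view; for example, to review uncommitted changes or resolve a conflicted merge:

$ git difftool HEAD
$ git mergetool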

How to compare (diff) files from two different branches

git difftool mybranch anotherbranch -- myfile.txt

or

git diff branch1:file branch2:file
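
To compare the two branches as a whole instead of a single file, a plain diff works as well, for example:

$ git diff mybranch anotherbranch
$ git diff mybranch anotherbranch --stat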

commit --amend

Edit a commit before pushing it. Example:

$ touch test.txt
$ git add test.txt
$ git commit -m "Test file"
[master 964fa35] Test file
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.txt
$ git log --oneline
964fa35 (HEAD -> master) Test file
$ touch test2.txt
$ git add test2.txt 
$ git commit --amend -m "Test and Test2 file"
$ git log --oneline
414a06e (HEAD -> master) Test and Test2 file
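
Note that --amend replaces the commit rather than editing it in place: the hash changed from 964fa35 to 414a06e. This is why you should only amend commits that haven’t been pushed yet.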

git log

git log shows the current HEAD and its ancestry. That is, it prints the commit HEAD points to, then its parent, its parent, and so on.

$ git log --oneline
$ git log HEAD@{1.hour.ago}
$ git log HEAD@{1.week.ago}

git reflog

git reflog doesn’t traverse HEAD’s ancestry at all. The reflog is an ordered list of the commits that HEAD has pointed to: it’s undo history for your repo. The reflog isn’t part of the repo itself (it’s stored separately from the commits themselves) and isn’t included in pushes, fetches or clones; it’s purely local.

$ git reflog HEAD@{1.hour.ago} --oneline
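
Because the reflog records every position HEAD has held, it is handy for undoing mistakes. A quick sketch (HEAD@{1} simply means “where HEAD was one step ago”):

$ git reflog
$ git reset --hard HEAD@{1}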

How to clear unreachable commits

$ git reflog expire --expire-unreachable=now --all
$ git gc --prune=now
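
To see which objects are actually unreachable before (or after) pruning, git fsck can list them:

$ git fsck --unreachable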

Gcloud Command Reference

Some handy configuration tricks

To set the project property in the core section, run:

$ gcloud config set project myProject

To set the zone property in the compute section, run:

$ gcloud config set compute/zone us-central1-a

To disable prompting for scripting, run:

$ gcloud config set disable_prompts true

To set a proxy with the appropriate type, and specify the address and port on which to reach it, run:

$ gcloud config set proxy/type http
$ gcloud config set proxy/address 1.234.56.78
$ gcloud config set proxy/port 8080
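
To review the resulting configuration, run:

$ gcloud config list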

Show your running instances

$ gcloud compute instances list
NAME            ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP   EXTERNAL_IP     STATUS
test3-jump-001  us-central1-a  f1-micro                    10.200.1.2    200.200.200.17  RUNNING
test3-node-001  us-central1-a  n1-standard-1               10.200.252.6                  RUNNING
test3-node-002  us-central1-a  n1-standard-1               10.200.252.2                  RUNNING
test3-node-003  us-central1-a  n1-standard-1               10.200.252.3                  RUNNING
test3-node-004  us-central1-a  n1-standard-1               10.200.252.4                  RUNNING
test3-node-005  us-central1-a  n1-standard-1               10.200.252.5                  RUNNING

Working with Buckets

To create a new bucket, run:

$ gsutil mb gs://mySuperNewBucket

To enable versioning on it, run:

$ gsutil versioning set on gs://mySuperNewBucket
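
To upload a file and then list all object versions (the file name here is just an example), run:

$ gsutil cp myfile.txt gs://mySuperNewBucket/
$ gsutil ls -a gs://mySuperNewBucket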

To remove all files from the bucket, run:

$ gsutil rm -r gs://mySuperNewBucket

To remove the bucket itself (it needs to be empty), run:

$ gsutil rb gs://mySuperNewBucket

Listing zones 

To show all available zones for your account, run:

$ gcloud compute zones list
NAME           REGION       STATUS  NEXT_MAINTENANCE  TURNDOWN_DATE
us-east1-b     us-east1     UP
us-east1-c     us-east1     UP
us-east1-d     us-east1     UP
us-east4-c     us-east4     UP
us-east4-b     us-east4     UP
us-east4-a     us-east4     UP
us-central1-c  us-central1  UP
us-central1-a  us-central1  UP

Listing machine types and showing CPU and memory sizes

You can filter by zone by adding a --filter expression to the command, for example:

$ gcloud compute machine-types list --filter="zone ~ ^us-central1-a"
NAME            ZONE           CPUS  MEMORY_GB  DEPRECATED
f1-micro        us-central1-a  1     0.60
g1-small        us-central1-a  1     1.70
n1-highcpu-16   us-central1-a  16    14.40
n1-highcpu-2    us-central1-a  2     1.80
n1-highcpu-32   us-central1-a  32    28.80
n1-highcpu-4    us-central1-a  4     3.60
n1-highcpu-64   us-central1-a  64    57.60
n1-highcpu-8    us-central1-a  8     7.20
n1-highcpu-96   us-central1-a  96    86.40
n1-highmem-16   us-central1-a  16    104.00
n1-highmem-2    us-central1-a  2     13.00
n1-highmem-32   us-central1-a  32    208.00
n1-highmem-4    us-central1-a  4     26.00
n1-highmem-64   us-central1-a  64    416.00
n1-highmem-8    us-central1-a  8     52.00
n1-highmem-96   us-central1-a  96    624.00
n1-standard-1   us-central1-a  1     3.75
n1-standard-16  us-central1-a  16    60.00
n1-standard-2   us-central1-a  2     7.50
n1-standard-32  us-central1-a  32    120.00
n1-standard-4   us-central1-a  4     15.00
n1-standard-64  us-central1-a  64    240.00
n1-standard-8   us-central1-a  8     30.00
n1-standard-96  us-central1-a  96    360.00

Connect to a compute instance using the gcloud credentials

To access an instance through your gcloud credentials, run:

$ gcloud compute --project "myCoolProjectName" ssh --zone "us-east1-b" "myCoolInstanceName"
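
Copying files works the same way; a sketch using the same placeholder names:

$ gcloud compute --project "myCoolProjectName" scp localfile.txt "myCoolInstanceName":~/ --zone "us-east1-b"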

Correct Terminal prompt wrapping

Sometimes I get wrong line wrapping when using a terminal over SSH, and it mostly turns out that the window size assumed by the terminal is not the same as the actual window size.
E.g.:
[Screenshot: wrong wrapping]

To solve this:

$ shopt checkwinsize

and check whether checkwinsize is enabled. If it isn’t:

$ shopt -s checkwinsize

Then just run another command or resize the window.

You can put this command in your ~/.bashrc or /etc/bashrc.
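
If that is not enough, the terminal size can also be inspected and set by hand with stty (the 50x132 values here are just an example):

$ stty size
$ stty rows 50 columns 132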

VMWare ESXi command-line

List all VMs with the command:

vim-cmd vmsvc/getallvms

Check the power state of the virtual machine with the command:

vim-cmd vmsvc/power.getstate <vmid>

Power-on the virtual machine with the command:

vim-cmd vmsvc/power.on <vmid>
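
Likewise, power it off (hard) or shut it down via VMware Tools with:

vim-cmd vmsvc/power.off <vmid>
vim-cmd vmsvc/power.shutdown <vmid>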

Retrieving Networking Information

esxcli network ip interface list
esxcli network ip interface ipv4 get -i vmk<X>

Register a VM from a VMX file

vim-cmd solo/registervm /vmfs/volumes/datastore_name/VM_directory/VM_name.vmx
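
The reverse, removing a VM from the inventory without deleting its files, is:

vim-cmd vmsvc/unregister <vmid>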

Answer the question asked after registering a VM from a VMX file

vim-cmd vmsvc/message <vmid>
vim-cmd vmsvc/message <vmid> <question_id> <answer_id>

vSwitch

How to Configure VMware vSwitch from ESX / ESXi Command Line
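
For a quick look from the CLI, the standard vSwitches can also be listed with:

esxcli network vswitch standard list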

Terraform Tips

This is just a brain dump to myself 🙂

I’m moving some functions from Ansible to Terraform, and these are some basic commands:

How to find the CentOS 7 official AMI Image

aws ec2 describe-images \
    --owners 'aws-marketplace' \
    --filters 'Name=product-code,Values=aw0evgkw8e5c1q413zgy5pjce' \
    --query 'sort_by(Images, &CreationDate)[-1].[ImageId]' \
    --region 'eu-central-1' \
    --output 'text'

How to find the Ubuntu official Image

Ubuntu AWS AMI Locator: https://cloud-images.ubuntu.com/locator/ec2/

AWS Marketplace (filtered by OS and Free): https://aws.amazon.com/marketplace/

instance.tf

terraform apply                      # Will create all machines defined in instance.tf
terraform destroy                    # Destroy managed infrastructure
terraform plan                       # Will show the changes but not apply them
terraform plan -out exampletest.out  # Save the plan to a file
terraform apply exampletest.out      # Apply the saved plan
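
For reference, a minimal instance.tf could be created like this sketch; the region, AMI ID, and instance type are placeholder assumptions, not values from this post:

cat > instance.tf <<'EOF'
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"  # placeholder; e.g. the CentOS 7 AMI found above
  instance_type = "t2.micro"
}
EOF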

terraform import aws_instance.example i-abcd1234

This command locates the AWS instance with ID i-abcd1234 and attaches its existing settings, as described by the EC2 API, to the name aws_instance.example in the Terraform state.
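
Note that import only populates the state: you still need a matching resource "aws_instance" "example" block in your .tf files, and running terraform plan afterwards shows any drift between the real instance and that configuration.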

Variables: how they work

IPSecVPN Flush and reset the Tunnels – Fortigate

Sometimes there are issues with IPsec VPN tunnels on FortiGate. Here are some commands to clear the SA sessions.

List the VPN tunnels:

diagnose vpn tunnel list | grep name

Choose the name of the tunnel that you want to reset:

diag vpn tunnel flush *Tunnel_NAME*
diag vpn tunnel reset *Tunnel_NAME*

If this does not work, clear the sessions on the firewall.
Create a filter with the destination IP that you want to clear:

diagnose sys session filter dst *IP_THAT_IS_STUCK*

Check that the filter matches the correct sessions:

diagnose sys session filter

If everything is OK, clear the sessions:

diagnose sys session clear

Then flush and reset the VPN again (on both sides).

Troubleshooting Fortigate Firewall Policies

For a simple and fast “debug” you can use the diagnose sniffer command, for example:

diagnose sniffer packet any "(host {IP1_TO_DEBUG} and host {IP2_TO_DEBUG}) and icmp" 4

If you need more details, use diag debug:

diag debug enable 
diag debug flow filter add {IP_TO_DEBUG}
diag debug flow show console enable
diag debug flow trace start 100          <== this will display 100 packets for this flow

To stop the trace, or disable debugging entirely, type:

diag debug flow trace stop

or

diag debug disable
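
It is also worth clearing the flow filter afterwards so stale filters don’t linger:

diag debug flow filter clear
diag debug reset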

Hortonworks Hadoop tuning

Tez

tez.task.resource.memory.mb
tez.am.resource.memory.mb

MapReduce2

MR Map Java Heap Size
MR Reduce Java Heap Size
MR AppMaster Java Heap Size

Yarn

yarn.scheduler.capacity.maximum-am-resource-percent=0.8 (this is a MUST)

  • Memory:
    • Node
  • Container:
    • Minimum container size
    • Maximum container size

Hive

  • Tez:
    • Tez Container Size
    • Hold containers to reduce latency = true
    • Number of containers held = 10
    • Memory (For Map Join)
  • hive-site:
    • set hive.execution.engine=tez;
      set hive.vectorized.execution.reduce.enabled = true;
      set hive.vectorized.execution.enabled = true;
      set hive.cbo.enable=true;
      set hive.compute.query.using.stats=true;
      set hive.stats.fetch.column.stats=true;
      set hive.stats.fetch.partition.stats=true;

Sqoop (Use ORC to improve performance)

# Import every table of a MySQL database into Hive as ORC.
# The -N flag skips the column-name header so it isn't treated as a table name.
mysql -N -h $myhost -u $myuser -p$mypass $mydb -e 'show tables' \
  | awk -v myuser="$myuser" -v mypass="$mypass" -v mydb="$mydb" -v myhost="$myhost" \
      '{ print "sqoop import --connect jdbc:mysql://"myhost"/"mydb" --username "myuser" --password "mypass" -m 1 --table "$1" --hcatalog-database "mydb" --hcatalog-table "$1" --create-hcatalog-table --hcatalog-storage-stanza \"stored as orcfile\"" }' \
  | bash
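
For a table named users in a database testdb (placeholder names), each generated line looks roughly like:

sqoop import --connect jdbc:mysql://myhost/testdb --username myuser --password mypass -m 1 --table users --hcatalog-database testdb --hcatalog-table users --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile"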

List all cron jobs for all users

#!/bin/bash
# List the crontab of every user in /etc/passwd (run as root),
# skipping comment lines and indenting each entry.
for user in $(cut -f1 -d: /etc/passwd); do
	echo "$user" && crontab -u "$user" -l 2>/dev/null | grep -v '^#' | sed 's/^/	/'
	echo " "
done

How to use Sqoop to import and append to Hive

# Import from mysql to hive

sqoop import --connect jdbc:mysql://<HOST>/<DB> \
  --username <MYUSER> \
  --password <MYPASS> \
  --table <MYTABLE> \
  --hive-import --hive-table <DBNAMEONHIVE>.<TABLE> \
  --fields-terminated-by ','

# Change the hive table to external (will be stored on HDFS)

alter table <TABLE NAME> SET TBLPROPERTIES('EXTERNAL'='TRUE')

# Verify where the table is stored on HDFS by looking at the Location field.

DESCRIBE FORMATTED <TABLE NAME>

# Now you can use sqoop to import and append directly to the HDFS location, which is reflected directly in the external table.

sqoop import --connect jdbc:mysql://<HOST>/<DB> \
  --username <MYUSER> \
  --password <MYPASS> \
  --table <MYTABLE> \
  --target-dir '<HDFS_LOCATION_OUTPUT>' --incremental append --check-column '<PRIMARY_KEY_COLUMN>' --last-value <LAST_VALUE_IMPORTED>