IPsec VPN: Flush and Reset the Tunnels – FortiGate

Sometimes there are issues with IPsec VPN tunnels on FortiGate. Here are some commands to clear the SA sessions.

List the VPN tunnels:

diagnose vpn tunnel list | grep name

Choose the name of the tunnel that you want to reset:

diag vpn tunnel flush *Tunnel_NAME*
diag vpn tunnel reset *Tunnel_NAME*

If this does not work, clear the sessions on the firewall.
Create a filter with the IP that you want to clear:

diagnose sys session filter dst *IP_THAT_IS_STUCK*

Check that the filter matches the correct sessions:

diagnose sys session filter

If everything is OK, clear the sessions:

diagnose sys session clear

Then flush and reset the VPN again (on both sides).
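
Putting the steps together, a typical full sequence might look like this (the tunnel name Office_VPN and the IP address are placeholders, not from the original):

diagnose vpn tunnel list | grep name
diag vpn tunnel flush Office_VPN
diag vpn tunnel reset Office_VPN
diagnose sys session filter dst 203.0.113.10
diagnose sys session filter
diagnose sys session clear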

Troubleshooting FortiGate Firewall Policies

For a simple and fast debug, you can use the diagnose sniffer command. For example:

diagnose sniffer packet any "(host {IP1_TO_DEBUG} and host {IP2_TO_DEBUG}) and icmp" 4

If you need more details, use diag debug:

diag debug enable 
diag debug flow filter add {IP_TO_DEBUG}
diag debug flow show console enable
diag debug flow trace start 100          <== this will display 100 packets for this flow

To stop the trace, type:

diag debug flow trace stop

or

diag debug disable

Kubernetes useful commands

!!This is a draft document!!

Minikube Setup Commands

Linux:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube start
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube  --type=NodePort
kubectl get pod
curl $(minikube service hello-minikube --url)
kubectl delete deployment hello-minikube
minikube stop

Basic Kubectl Commands

kubectl get pods
kubectl get pods <pod-name>
kubectl expose <resource>/<name> [--port=external-port] [--target-port=container-port] [--type=service-type]
kubectl port-forward <pod-name> [LOCAL_PORT:]REMOTE_PORT
kubectl attach <pod-name> -c <container-name>
kubectl exec [-it] <pod-name> [-c CONTAINER] -- COMMAND [args...]
kubectl label [--overwrite] <resource> <name> KEY_1=VAL_1 ...
kubectl run <name> --image=<image>
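
As a concrete illustration of the generic forms above (the pod, container, and label names are made up):

kubectl get pods my-pod
kubectl expose deployment my-deployment --port=80 --target-port=8080 --type=NodePort
kubectl port-forward my-pod 8080:80
kubectl attach my-pod -c my-container
kubectl exec -it my-pod -c my-container -- /bin/sh
kubectl label --overwrite pods my-pod env=staging
kubectl run my-pod --image=nginx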

Scaling Commands

kubectl scale --replicas=4 deployment/tomcat-deployment
kubectl expose deployment tomcat-deployment --type=NodePort
kubectl expose deployment tomcat-deployment --type=LoadBalancer --port=8080 --target-port=8080 --name tomcat-load-balancer
kubectl describe services tomcat-load-balancer

Deployment Commands

kubectl get deployments
kubectl rollout status deployment/<name>
kubectl set image deployment/<name> <container>=<new-image>
kubectl rollout history deployment/<name>
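
For example, a typical rolling-update cycle on a hypothetical nginx deployment:

kubectl get deployments
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment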

Secret Commands

kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD
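
To verify what was stored (the secret names come from the commands above; note the values are base64-encoded, and the jsonpath key must be escaped because it contains a dot):

kubectl get secrets
kubectl describe secret db-user-pass
kubectl get secret db-user-pass -o jsonpath='{.data.username\.txt}' | base64 --decode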

Hortonworks Hadoop tuning

Tez

tez.task.resource.memory.mb
tez.am.resource.memory.mb

MapReduce2

MR Map Java Heap Size
MR Reduce Java Heap Size
MR AppMaster Java Heap Size

Yarn

yarn.scheduler.capacity.maximum-am-resource-percent=0.8 (80%; this is a MUST. Note the property takes a fraction, not a whole-number percentage.)

  • Memory:
    • Node
  • Container:
    • Minimum container size
    • Maximum container size
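
As a rough, illustrative sizing sketch (the numbers below are assumptions for a hypothetical 64 GB worker node, not recommendations):

# reserve ~8 GB for the OS and Hadoop daemons
yarn.nodemanager.resource.memory-mb  = 57344   # 56 GB available for containers
yarn.scheduler.minimum-allocation-mb = 4096    # minimum container size
yarn.scheduler.maximum-allocation-mb = 57344   # maximum container size
# => at most 57344 / 4096 = 14 containers per node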

Hive

  • Tez:
    • Tez Container Size
    • Hold containers to reduce latency = true
    • Number of containers held = 10
    • Memory (For Map Join)
  • hive-site:
    • set hive.execution.engine=tez;
      set hive.vectorized.execution.reduce.enabled = true;
      set hive.vectorized.execution.enabled = true;
      set hive.cbo.enable=true;
      set hive.compute.query.using.stats=true;
      set hive.stats.fetch.column.stats=true;
      set hive.stats.fetch.partition.stats=true;

Sqoop (use ORC to improve performance)

# import (use mysql -N to skip the column-header line so it is not treated as a table name)
mysql -N -h $myhost -u $myuser -p$mypass $mydb -e 'show tables' | awk -v myuser="$myuser" -v mypass="$mypass" -v mydb="$mydb" -v myhost="$myhost" '{ print "sqoop import --connect jdbc:mysql://"myhost"/"mydb" --username "myuser" --password "mypass" -m 1 --table "$1" --hcatalog-database "mydb" --hcatalog-table "$1" --create-hcatalog-table --hcatalog-storage-stanza \"stored as orcfile\""}' | bash
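
For reference, the command the awk line generates for a single hypothetical table called mytable expands to:

sqoop import --connect jdbc:mysql://$myhost/$mydb \
  --username $myuser --password $mypass \
  -m 1 --table mytable \
  --hcatalog-database $mydb --hcatalog-table mytable \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "stored as orcfile"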


How to create a Docker image with Jenkins, CasperJS, PhantomJS and ChromeDriver

Create a Dockerfile with this content:

FROM jenkins

USER root

RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && apt-get update && apt-get install -y google-chrome-stable

RUN /usr/local/bin/install-plugins.sh ace-editor  ant antisamy-markup-formatter branch-api build-timeout cloudbees-folder credentials credentials-binding chromedriver  durable-task email-ext external-monitor-job git git-client github github-api github-branch-source github-organization-folder git-server gradle handlebars icon-shim javadoc jquery-detached junit ldap mailer mapdb-api matrix-auth matrix-project momentjs pam-auth pipeline-build-step pipeline-input-step pipeline-rest-api pipeline-stage-step pipeline-stage-view plain-credentials scm-api script-security ssh-credentials ssh-slaves structs subversion timestamper token-macro windows-slaves workflow-aggregator workflow-api workflow-basic-steps workflow-cps workflow-cps-global-lib workflow-durable-task-step workflow-job workflow-multibranch workflow-scm-step workflow-step-api workflow-support ws-cleanup

RUN apt-get clean && apt-get update && apt-get install -y npm build-essential chrpath libssl-dev libxft-dev libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev

RUN cd ~ && wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 && tar xvfj phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm phantomjs-2.1.1-linux-x86_64.tar.bz2 && mv phantomjs-2.1.1-linux-x86_64 /usr/local/share && ln -sf /usr/local/share/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin && npm install -g casperjs

Explanation

Dockerfile first RUN

In the first RUN we install google-chrome-stable, for use with the ChromeDriver plugin in Jenkins (we switch to USER root first, since the base jenkins image runs as the jenkins user and apt-get needs root).

Dockerfile second RUN

Here we install all the plugins we use on Jenkins.

Dockerfile third RUN

In the third RUN we install all the packages needed by the latest version of PhantomJS.

Dockerfile last RUN

In this RUN we download and install the latest version of PhantomJS, then install CasperJS using npm.

Build and run the new Jenkins Docker image

docker build -t lcjenkins .
docker run --name lcjenkins -p 8888:8080 -p 50000:50000 -v $jhome:/var/jenkins_home lcjenkins

* Replace $jhome with the path where you want the Jenkins home directory to be.
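
For example, assuming you keep the Jenkins home under /srv (the path is only an example; the official jenkins image runs as uid 1000, so the directory may need matching ownership):

jhome=/srv/jenkins_home
sudo mkdir -p "$jhome" && sudo chown 1000:1000 "$jhome"
docker run --name lcjenkins -p 8888:8080 -p 50000:50000 -v "$jhome":/var/jenkins_home lcjenkins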

Google Cloud gcloud multiple accounts and basic usage

To use the gcloud tool, follow these simple steps:

Install the gcloud tool

Visit this page:

https://cloud.google.com/sdk/docs/quickstart-linux

Download the archive file for your system, uncompress it, and run:

./google-cloud-sdk/install.sh

Then follow the script's prompts.

Initial config

gcloud init

Then follow the prompts.

List your instances

With this command you can check that the configuration is working and list your instances:

gcloud compute instances list

Multiple Accounts

You can use multiple accounts by re-running the gcloud init command, or with the command gcloud config set account your@account.com.
After setting the account, run gcloud compute instances list to authenticate the new account and configure your project.
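
If you switch accounts often, gcloud also supports named configurations, which bundle an account and a project together (the configuration, account, and project names below are examples):

gcloud config configurations create work
gcloud config set account you@work-domain.com
gcloud config set project my-work-project
gcloud config configurations activate default
gcloud config configurations list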

Useful commands

To update all installed components to the latest version:

gcloud components update

Displays all Google Compute Engine images in a project:

gcloud compute images list

To list Google Compute Engine machine types:

gcloud compute machine-types list

To list all addresses in a project in table form, run:

gcloud compute addresses list

It’s a very easy tool 🙂

More info: https://cloud.google.com/sdk/docs/how-to

How to use Sqoop to import and append to Hive

# Import from mysql to hive

sqoop import --connect jdbc:mysql://<HOST>/<DB> \
  --username <MYUSER> \
  --password <MYPASS> \
  --table <MYTABLE> \
  --hive-import --hive-table <DBMAMEONHIVE>.<TABLE> \
  --fields-terminated-by ','

# Change the Hive table to external (it will be stored on HDFS)

alter table <TABLE NAME> SET TBLPROPERTIES('EXTERNAL'='TRUE')

# Verify where the table is stored on HDFS by looking at the Location field.

DESCRIBE FORMATTED <TABLE NAME>

# Now you can import and append with Sqoop directly to the HDFS location, which will be reflected directly in the external table.

sqoop import --connect jdbc:mysql://<HOST>/<DB> \
  --username <MYUSER> \
  --password <MYPASS> \
  --table <MYTABLE> \
  --target-dir '<HDFS_LOCATION_OUTPUT>' --incremental append --check-column '<PRIMARY_KEY_COLUMN>' --last-value <LAST_VALUE_IMPORTED>
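
# To avoid tracking --last-value by hand, Sqoop can also store it for you in a saved job
# (the job name below is illustrative; Sqoop records the last imported value after each run).

sqoop job --create my_append_job -- import \
  --connect jdbc:mysql://<HOST>/<DB> \
  --username <MYUSER> --password <MYPASS> \
  --table <MYTABLE> \
  --target-dir '<HDFS_LOCATION_OUTPUT>' \
  --incremental append --check-column '<PRIMARY_KEY_COLUMN>'

sqoop job --exec my_append_job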

How to set up automatic filesystem checks and repair on Linux

Trigger Automatic Filesystem Check upon Boot
If you want to trigger fsck automatically upon boot, there are distro-specific ways to set up an unattended fsck during boot time.

On Debian, Ubuntu or Linux Mint, edit /etc/default/rcS as follows.

$ sudo vi /etc/default/rcS
# automatically repair filesystems with inconsistencies during boot
FSCKFIX=yes
On CentOS, edit /etc/sysconfig/autofsck (or create it if it doesn’t exist) with the following content.

$ sudo vi /etc/sysconfig/autofsck
AUTOFSCK_DEF_CHECK=yes

Force One-Time Filesystem Check on the Next Reboot
If you want to trigger a one-time filesystem check on your next reboot, you can use this command:

$ sudo touch /forcefsck
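
On ext2/3/4 filesystems you can also schedule periodic checks with tune2fs (the device name is just an example):

# check after every 30 mounts or every 3 months, whichever comes first
$ sudo tune2fs -c 30 -i 3m /dev/sda1
# show the current mount count and check interval
$ sudo tune2fs -l /dev/sda1 | grep -i 'mount count\|check'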