Upgrading OpenSSH to OpenSSH_8.0p1

wget -c https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-8.0p1.tar.gz

tar -xzf openssh-8.0p1.tar.gz

Install the build dependencies: yum install gcc openssl-devel

cd openssh-8.0p1

./configure

make

make install

Verify the new version: ssh -V
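
The steps above can be collected into one sketch script (assumes a RHEL/CentOS-style host with yum and root privileges; zlib-devel is an extra dependency that source builds commonly need):

```shell
#!/bin/sh
# Sketch: build and install OpenSSH 8.0p1 from source on a yum-based host.
set -e

VERSION=8.0p1
yum install -y gcc openssl-devel zlib-devel

wget -c "https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-${VERSION}.tar.gz"
tar -xzf "openssh-${VERSION}.tar.gz"
cd "openssh-${VERSION}"

./configure
make
make install   # installs under /usr/local by default

# Confirm the new binary reports the expected version; the distro's
# packaged ssh may still be first in PATH.
/usr/local/bin/ssh -V
```

Note that a bare `./configure` installs under /usr/local, so the system's packaged OpenSSH remains untouched; adjust PATH or use `--prefix` as needed.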

OpenSSH user enumeration vulnerability (CVE-2018-15473)

An OpenSSH user enumeration vulnerability (CVE-2018-15473) became public via a GitHub commit. The flaw does not produce a list of valid usernames, but it does let an attacker test guessed usernames one at a time.

By sending a malformed public-key authentication message to an OpenSSH server, the existence of a particular username can be ascertained. If the user does not exist, an authentication failure message is sent to the client. If the user exists, failure to parse the message aborts the communication: the connection is closed without any message being sent back.

How do you detect whether a host is being targeted by this attack? Search the SSH logs for this type of event:

fatal: ssh_packet_get_string: incomplete message [preauth]

Keep an eye on your log files and block suspicious IP addresses that make too many SSH attempts (cross-check with your firewall logs).
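
A minimal sketch for counting these probes in a log file (the log path is an assumption; RHEL-style systems log sshd messages to /var/log/secure, Debian-style systems to /var/log/auth.log):

```shell
# Count CVE-2018-15473 probe attempts recorded in an sshd log file.
# Looks for the tell-tale "incomplete message" fatal error.
count_ssh_probes() {
    logfile="$1"
    grep -c 'ssh_packet_get_string: incomplete message' "$logfile"
}

# Example usage on a RHEL-style host (path is an assumption):
if [ -r /var/log/secure ]; then
    count_ssh_probes /var/log/secure
fi
```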

Affected Software/OS:
OpenSSH versions 7.7 and earlier.

Solution:
Update to version 7.8 or later.

How to set up CI/CD with Jenkins on Google Kubernetes Engine.

Note: Test everything before using it.


Prerequisites: create an organization in Google Cloud, then create a project under that organization.

Main steps: package and deploy the app on Google Kubernetes Engine.
a) Package the app into a Docker image
b) Run the container
c) Upload the image to a registry
d) Create a container cluster
e) Deploy the app to the cluster
f) Expose the app to the Internet
g) Scale up the deployment if required
h) Deploy new versions using CI/CD (Jenkins)
Step 1: Set up the project. Enable the Google Cloud APIs using the attached script (file name: enable_api.sh).
Step 2: Build the container image:

export PROJECT_ID=[projectid]
docker build -t gcr.io/${PROJECT_ID}/myapp:v1 .

# verify the build succeeded
docker images
Step 3: Upload the container image. Configure Docker to authenticate to Container Registry, then push the image:

gcloud auth configure-docker
docker push gcr.io/$PROJECT_ID/myapp:v1
Step 4: Deploy the application:

kubectl create deployment myapp --image=gcr.io/$PROJECT_ID/myapp:v1
Step 5: Expose the app to the Internet:

kubectl expose deployment myapp --type=LoadBalancer --port 80 --target-port 8080
Step 6: Scale up the application:

kubectl scale deployment myapp --replicas=3
Step 7: Use a rolling update to move the existing deployment to a new version:

kubectl set image deployment/myapp myapp=gcr.io/$PROJECT_ID/myapp:v2
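
Steps 2 through 7 above can be collected into a single sketch script (the project ID is a placeholder; assumes docker, gcloud, and kubectl are installed and authenticated):

```shell
#!/bin/sh
# Sketch of the GKE deployment flow described above.
set -e
export PROJECT_ID=my-project   # placeholder: replace with your project ID

# Build and push the image to Container Registry
docker build -t "gcr.io/${PROJECT_ID}/myapp:v1" .
gcloud auth configure-docker
docker push "gcr.io/${PROJECT_ID}/myapp:v1"

# Deploy, expose, and scale
kubectl create deployment myapp --image="gcr.io/${PROJECT_ID}/myapp:v1"
kubectl expose deployment myapp --type=LoadBalancer --port 80 --target-port 8080
kubectl scale deployment myapp --replicas=3

# Later: rolling update to a new version
kubectl set image deployment/myapp myapp="gcr.io/${PROJECT_ID}/myapp:v2"
```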
CI/CD STEPS:
Push the Docker image to Google Container Registry using Jenkins.

Step 1: Install the required Jenkins plugins:
Google OAuth Credentials plugin
Docker Pipeline plugin
Google Container Registry Auth plugin

Step 2: Create a service account in GCP and attach the roles Storage Admin and Storage Object Viewer.

Step 3: In Jenkins, go to Credentials > Global credentials > Add Credentials and select kind: Google Service Account from private key. You can use the JSON key file generated when the service account is created.

Step 4: In the Jenkins project, go to Configure > Build > Docker Build and Publish.

The above steps pull the code and push the image to the GCR repository.
The script kube_deploy.sh will deploy the image to the Kubernetes cluster running on GCP.

Please use it and leave comments in the GitHub repo: https://github.com/linuxmaster007/GKE.git

How to Pass the Google Cloud Associate Cloud Engineer Exam.

I thought I would write up my experience to help others who aspire to take this Google certification. First and foremost: never depend only on the practice questions available online. Hands-on experience is very important, because Google has designed the questions in such a way that you may get confused by the options if you lack hands-on experience. You won't find a similar set of questions anywhere online, so spend more time in hands-on labs and make sure you are thorough with each command and its use cases.

The Associate Cloud Engineer exam mainly tests the following areas:

  • Set up a cloud solution environment
  • Plan and configure a cloud solution
  • Deploy and implement a cloud solution
  • Ensure the successful operation of a cloud solution
  • Configure access and security

Exam Details:

  • Number of Questions: 50
  • Exam Duration: 120 min (2 Hrs)
  • Exam Format: Multiple choice and Multiple select.
  • Registration Fee: $125 USD
  • Exam Language: English, Japanese, Spanish, Portuguese, French, German

Preparation for the exam:

I work with Google Cloud, but I still went through the Linux Academy course, which I found very useful and which helped me pass this certification exam. Linux Academy has a very good course for the Google Cloud Associate exam, with a practice exam at the end that is also very useful; I would say you are likely ready if you score more than 95% on that practice test. Google also has a practice test: https://cloud.google.com/certification/practice-exam/cloud-engineer

I also went through the practice questions on BrainCert and Whizlabs, which helped increase my confidence.

You should have a proper understanding of the following topics in order to pass the Google Cloud Certified Associate Cloud Engineer exam.

BILLING

https://cloud.google.com/billing/docs/how-to/budgets

IAM

Expect a lot of questions on IAM: the least-privilege principle, filtering roles, billing roles, Compute Engine roles, storage roles, etc.

https://cloud.google.com/iam/docs/service-accounts

https://cloud.google.com/compute/docs/access/service-accounts#computeenginedefaultserviceaccount

https://cloud.google.com/iam/docs/understanding-roles

https://cloud.google.com/iam/docs/understanding-roles#primitive_roles

https://cloud.google.com/iam/reference/rest/v1/Policy

APPENGINE

Splitting traffic between different versions, deploying applications using the CLI, regions, zones.

https://cloud.google.com/sdk/gcloud/reference/app/deploy

https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/list

https://cloud.google.com/appengine/docs/standard/php/an-overview-of-app-engine#limits
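
Traffic splitting between versions, mentioned above, is done with gcloud app services set-traffic; a sketch (the service and version names are placeholders):

```shell
# Split traffic 50/50 between two deployed versions of the default service.
gcloud app services set-traffic default --splits v1=0.5,v2=0.5

# Deploy a new version without routing any traffic to it yet:
gcloud app deploy --no-promote
```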

COMPUTE ENGINE

How to set max idle nodes, autoscaling, gcloud commands, provisioning VM instances, Deployment Manager.

https://cloud.google.com/sdk/gcloud/reference/config/set

https://cloud.google.com/compute/docs/startupscript

https://cloud.google.com/compute/docs/storing-retrieving-metadata

https://cloud.google.com/compute/docs/machine-types

https://cloud.google.com/compute/docs/disks/scheduled-snapshots

https://cloud.google.com/compute/docs/instance-groups/#autohealing

VPC

Configuring firewalls, expanding IP ranges, subnets, CIDR.

https://cloud.google.com/vpc/docs/using-vpc

https://cloud.google.com/vpc/docs/firewalls

https://cloud.google.com/compute/docs/ip-addresses/

https://cloud.google.com/load-balancing/

https://cloud.google.com/load-balancing/docs/choosing-load-balancer

https://cloud.google.com/router/docs/

PROJECTS

Creating projects, linking projects to a billing account, creating and managing projects using the CLI, describing existing configurations and activating configurations.

CLOUD STORAGE

Different storage classes, commands like gsutil, automatic deletion of objects, lifecycle policies.

https://cloud.google.com/storage/docs/storage-classes
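
As an example of the lifecycle policies mentioned above, a rule file can be applied with gsutil (the bucket name is a placeholder):

```shell
# Lifecycle rule: automatically delete objects older than 365 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 365}}
  ]
}
EOF

# Apply the rule to a bucket (placeholder name):
gsutil lifecycle set lifecycle.json gs://my-bucket
```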

KUBERNETES ENGINE

Deploying from a Dockerfile, activating and describing configurations, kubectl commands, autoscaling, cluster nodes and services, understanding Container Registry.

https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset

https://cloud.google.com/kubernetes-engine/docs/concepts/pod

https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset

https://cloud.google.com/sdk/gcloud/reference/container/clusters/create

https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
https://cloud.google.com/kubernetes-engine/docs/quickstart
https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
https://cloud.google.com/kubernetes-engine/quotas
https://cloud.google.com/kubernetes-engine/docs/troubleshooting

BIGDATA

Dataproc, Pub/Sub, data analytics.

STACKDRIVER

Logging and monitoring metrics, exporting to BigQuery.

https://cloud.google.com/error-reporting/
https://cloud.google.com/logging/
https://cloud.google.com/profiler/
https://cloud.google.com/debugger/
https://cloud.google.com/trace/
https://cloud.google.com/logging/docs/audit/

I wish good luck to everyone preparing for this certification.

Know Your Server Load Average

Load

The load average is shown in many different graphical and terminal utilities, including in the top command. However, the easiest, most standardized way to see your load average is to run the uptime command in a terminal. This command shows your computer’s load average as well as how long it’s been powered on.

A load average readout looks like this:

load average: 1.05, 0.70, 5.09

From left to right, these numbers show you the average load over the last one minute, the last five minutes, and the last fifteen minutes. In other words, the above output means:

load average over the last 1 minute: 1.05

load average over the last 5 minutes: 0.70

load average over the last 15 minutes: 5.09

What Does It Mean?

Assuming you’re using a single-CPU system, the numbers tell us that:

Over the last 1 minute: the computer was overloaded by 5% on average; 0.05 processes were waiting for the CPU. (1.05)

Over the last 5 minutes: the CPU idled for 30% of the time on average. (0.70)

Over the last 15 minutes: the computer was overloaded by 409% on average; 4.09 processes were waiting for the CPU. (5.09)

If you have a load average of 2 on a single-CPU system, your system was overloaded by 100 percent: for the entire period, one process was using the CPU while one other process waited. On a system with two CPUs, this would be full usage: two different processes were using two different CPUs the whole time. On a system with four CPUs, this would be half usage: two processes were using two CPUs while two CPUs sat idle.

To understand the load average number, you need to know how many CPUs your system has. A load average of 6.03 would indicate a system with a single CPU was massively overloaded, but it would be fine on a computer with 8 CPUs.
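
As a quick check, the comparison above can be scripted: read the 1-minute figure from /proc/loadavg and divide it by the CPU count (a Linux sketch; nproc is from GNU coreutils):

```shell
# Print a load figure normalized per CPU; a result above 1.00 means
# processes were waiting for CPU time on average.
per_cpu_load() {
    load=$1   # a load average figure, e.g. the first field of /proc/loadavg
    cpus=$2   # CPU count, e.g. from: nproc
    awk -v l="$load" -v c="$cpus" 'BEGIN { printf "%.2f\n", l / c }'
}

# Example usage on a live Linux system:
if [ -r /proc/loadavg ]; then
    per_cpu_load "$(cut -d' ' -f1 /proc/loadavg)" "$(nproc)"
fi
```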

POODLE ATTACK

A POODLE attack is an exploit that takes advantage of the way some browsers deal with encryption. POODLE (Padding Oracle On Downgraded Legacy Encryption) is the name of the vulnerability that enables the exploit.

POODLE can be used to target browser-based communication that relies on the Secure Sockets Layer (SSL) 3.0 protocol for encryption and authentication. The Transport Layer Security (TLS) protocol has largely replaced SSL for secure communication on the Internet, but many browsers will revert to SSL 3.0 when a TLS connection is unavailable. An attacker who wants to exploit POODLE takes advantage of this by inserting himself into the communication session and forcing the browser to use SSL 3.0.

Apple, Google, Mozilla, and Microsoft have all dropped support for SSL 3.0 or announced plans to do so.

Test using https://www.poodletest.com

If you see a poodle, you have some cleaning up to do.


What Does POODLE Do?

POODLE tries to force the connection between your web browser and the server to downgrade to SSLv3. If it succeeds, the attacker can recover plaintext from the communication, including cookies, which are often used to store session information.
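
You can also test a server directly with openssl s_client (the hostname below is a placeholder). Note that many modern OpenSSL builds are compiled without SSLv3 support, in which case the -ssl3 flag itself is rejected:

```shell
# Attempt an SSLv3 handshake. A successful handshake means the server
# still accepts SSLv3 and is potentially exposed to POODLE; a handshake
# failure means SSLv3 is disabled on the server.
openssl s_client -connect example.com:443 -ssl3 < /dev/null
```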

LOOKER TOOL INSTALLATION ERROR

NotImplementedError: getppid unsupported or native support failed to load
ppid at org/jruby/RubyProcess.java:752
ppid at org/jruby/RubyProcess.java:749

To troubleshoot this issue:

First, check that libcrypt is installed correctly:

ldconfig -p | grep libcrypt

If the path is correct, check whether /tmp is mounted properly.
The mount command will show the mount details; if the /tmp mount has the noexec option, this will not work.

noexec: do not allow direct execution of any binaries on the mounted filesystem.

So remove the noexec option from /etc/fstab and remount as given below:

mount -o remount /tmp

This should fix the issue.
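
The /tmp check above can be scripted; a minimal sketch using findmnt (from util-linux):

```shell
# Warn if /tmp is mounted with noexec, which prevents JRuby from
# executing its extracted native libraries there.
check_tmp_exec() {
    opts=$1   # mount options string, e.g. from: findmnt -no OPTIONS /tmp
    case ",$opts," in
        *,noexec,*) echo "/tmp is mounted noexec" ;;
        *)          echo "/tmp allows execution" ;;
    esac
}

# Example usage on a live system:
if command -v findmnt >/dev/null; then
    check_tmp_exec "$(findmnt -no OPTIONS /tmp)"
fi
```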

Content Delivery Network (CDN)

Concept:

• A content delivery network (CDN) is an interconnected system of computers on the Internet that delivers web content rapidly to numerous users by duplicating the content on multiple servers and directing each user to the server closest to them.

(Diagram: content delivery without a CDN vs. with a CDN)

How it works:
• A CDN works by caching a website's static content, such as images, CSS, JS, and videos, on servers all around the web.

Example:
• If a visitor is in India and the web server is in the United States, each request made by the visitor must travel from India to the United States. Although each request may take only a few hundred milliseconds, a site that makes dozens of requests can see its overall load time increase noticeably. With a CDN in place, the user in India still reaches the server in the United States, but gets the static content from the nearest CDN node.

Benefits:
• Decrease Server Load
• Faster Content Delivery
• Improving page load times

How to test:
• Enable Firebug in the Firefox browser.
• Open the Firebug add-on and click on the "Net" tab.
• Under the "Net" tab, click on "All" and refresh the page.
• Now check the domain serving the CSS, image, and JavaScript files; it should be the CDN's.
• (You can also check the separate tabs for CSS, JavaScript, and Images.)
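
Firebug is no longer maintained, so as an alternative sketch you can inspect response headers with curl (the URL is a placeholder); assets served by a CDN typically show a CDN hostname or cache headers such as Via or X-Cache:

```shell
# Fetch only the headers of a static asset and look for CDN hints
# (Via, X-Cache, CF-Cache-Status) or a CDN-branded Server header.
curl -sI https://example.com/static/style.css | grep -iE 'via|x-cache|cf-cache-status|server'
```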

What to test:
• Check the functionality of your respective modules; nothing should break.
• Check that images, CSS, and JS load properly.

Concerns:
• On each deployment of CSS, image, and JS files to the CDN, the changes may not be reflected immediately; in that case we have to refresh (purge) the deployed files on the CDN.

Disk Space Cleanup Shell Script

This is a cleanup script you can use as a starting point. Test it yourself before running it on production servers:

dir_cleanup.sh

start_dte=`date '+%X %x'`
this_server=$(uname -a | cut -d" " -f2)

echo "Start $0 ${start_dte}"

# command line

if [ $# -ne 1 ]
then
echo "Usage: $0 <env_file>"
exit 1
fi

env_file=$1

# Source in the Env file
. ${env_file}

#########################################################################
# function to cleanup the directory
#########################################################################

function do_cleanup
{
#
# Remove files older than the configured number of days
#

for dir in $bkp_dir1;
do
  echo "Clean $dir"
  find $dir -mtime +$bkp_d1 -type f -print -exec rm -f {} \;
done

for dir in $bkp_dir2;
do
  echo "Clean $dir"
  find $dir -mtime +$bkp_d2 -type f -print -exec rm -f {} \;
done

for dir in $bkp_arch;
do
  echo "Clean $dir"
  find $dir -mtime +$bkp_d4 -type f -print -exec rm -f {} \;
done

for dir in $bkp_log_dir;
do
  echo "Clean $dir"
  find $dir -mtime +$bkp_d3 -type f -print -exec rm -f {} \;
done

}

#########################################################################
# Call function to cleanup the directory
#########################################################################

do_cleanup

echo "Stop $0 `date '+%X %x'`"

The env file defines the retention days and the paths that need to be cleaned:

dir_cleanup.env

bkp_d1=35
bkp_d2=45
bkp_d3=10
bkp_d4=50

bkp_dir1="/path/"
#bkp_dir2=""
#bkp_arch=""
#bkp_log_dir=""

Execute the script from cron as given below:

/home/dir_cleanup.sh /home/cdv/env/dir_cleanup.env > /var/logs/dir_cleanup_log.$(date +'%Y%m%d_%H%M%S')
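
A hypothetical crontab entry for a nightly 02:00 run (note that % characters have special meaning in crontab command fields and must be escaped with a backslash):

```shell
# m h dom mon dow  command
0 2 * * * /home/dir_cleanup.sh /home/cdv/env/dir_cleanup.env > /var/logs/dir_cleanup_log.$(date +\%Y\%m\%d_\%H\%M\%S) 2>&1
```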