An OpenSSH user enumeration vulnerability (CVE-2018-15473) became public via a GitHub commit. The vulnerability does not produce a list of valid usernames, but it does let an attacker confirm guessed usernames one at a time.
By sending a malformed public key authentication message to an OpenSSH server, the existence of a particular username can be ascertained. If the user does not exist, an authentication failure message is sent back to the client. If the user does exist, the failure to parse the malformed message aborts the communication: the connection is closed without any message being sent back.
How can you detect whether a host is being targeted by this attack? Search the SSH authentication logs for this type of event: repeated connections from the same source that are closed during the pre-authentication phase.
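A minimal detection sketch, with the caveat that the log path and exact message wording vary by distro and OpenSSH version (Debian/Ubuntu use /var/log/auth.log, RHEL/CentOS use /var/log/secure):

```shell
# Count pre-auth disconnects per source IP in an OpenSSH auth log.
# The log path is an assumption; pass your own as the first argument.
LOG="${1:-/var/log/auth.log}"
grep 'sshd\[' "$LOG" | grep '\[preauth\]' | grep 'Connection closed by' \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "port") print $(i-1) }' \
  | sort | uniq -c | sort -rn | head
```

A single source IP with a large count of pre-auth disconnects is a candidate for this kind of enumeration probe.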
I thought I would write up my experience so it helps others who aspire to take this Google certification. First and foremost: never depend solely on the practice questions available online. Hands-on experience is very important, because Google has created the question patterns in such a way that you may get confused by the options if you lack hands-on experience. You won't find a similar set of questions to those asked in the exam anywhere online, so spend more time on hands-on labs and make sure you are thorough with each and every command and its use cases.
The Associate Cloud Engineer exam mainly checks the following areas of expertise:
Set up a cloud solution environment
Plan and configure a cloud solution
Deploy and implement a cloud solution
Ensure the successful operation of a cloud solution
Configure access and security
Exam Details:
Number of Questions: 50
Exam Duration: 120 min (2 Hrs)
Exam Format: Multiple choice and Multiple select.
Registration Fee: $125 USD
Exam Language: English, Japanese, Spanish, Portuguese, French, German
Preparation for the exam:
I work on Google Cloud, but I still went through the Linux Academy course, which I found very useful and which helped me pass this certification exam. Linux Academy has a very good course for the Google Cloud Associate exam, and there is a practice exam at the end of the course which is very useful. I would say you are probably ready if you score more than 95% on that practice test. Google also has a practice test: https://cloud.google.com/certification/practice-exam/cloud-engineer
I also went through the practice questions on BrainCert and Whizlabs, which I feel helped increase my confidence.
You should have a proper understanding of the topics below in order to pass the Google Cloud Certified Associate Cloud Engineer exam.
PROJECTS AND CONFIGURATIONS
Creating projects, linking projects to a billing account, creating and managing projects using the CLI, describing existing configurations and activating configurations
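The project and configuration workflow boils down to a handful of gcloud commands. In this sketch the project and configuration names are placeholders, and the billing-account link is elided since account IDs are environment-specific:

```shell
# Placeholder names for illustration only
PROJECT_ID="my-demo-project"
CONFIG_NAME="dev-config"

project_setup() {
  gcloud projects create "$PROJECT_ID"                  # create a project
  gcloud config configurations create "$CONFIG_NAME"    # new named configuration
  gcloud config configurations list                     # describe existing ones
  gcloud config configurations activate "$CONFIG_NAME"  # switch to it
  gcloud config set project "$PROJECT_ID"               # bind project to the active config
}
# Invoke project_setup yourself once the Cloud SDK is installed and authenticated.
```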
CLOUD STORAGE
Different storage classes, commands like gsutil, automatic deletion of objects, lifecycle policies
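For example, automatic deletion is configured through a lifecycle policy. The rule below deletes objects older than 30 days; the bucket name is a placeholder and the gsutil calls are left commented for you to run against a real bucket:

```shell
# Write a lifecycle rule that deletes objects older than 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 30 } }
  ]
}
EOF
# Apply and verify (placeholder bucket name):
# gsutil lifecycle set lifecycle.json gs://my-demo-bucket
# gsutil lifecycle get gs://my-demo-bucket
```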
KUBERNETES ENGINE
Deployment process for a Dockerfile, activating and describing configurations, kubectl commands, autoscaling, cluster nodes and services, understanding Container Registry
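A few of the kubectl commands worth drilling, collected into a function so nothing runs by accident. The deployment name and image path are placeholders, and the commands need an authenticated cluster to actually work:

```shell
# Sketch of common GKE/kubectl operations; names are placeholders.
gke_drill() {
  kubectl get nodes                                  # list cluster nodes
  kubectl create deployment web --image=gcr.io/my-proj/web:v1
  kubectl expose deployment web --type=LoadBalancer --port=80
  kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80
  kubectl get pods,services                          # inspect workloads
}
# Run gke_drill after: gcloud container clusters get-credentials <cluster>
```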
The load average is shown in many different graphical and terminal utilities, including in the top command. However, the easiest, most standardized way to see your load average is to run the uptime command in a terminal. This command shows your computer’s load average as well as how long it’s been powered on.
A load average readout looks like this:
load average: 1.05, 0.70, 5.09
From left to right, these numbers show you the average load over the last one minute, the last five minutes, and the last fifteen minutes. In other words, the above output means:
load average over the last 1 minute: 1.05
load average over the last 5 minutes: 0.70
load average over the last 15 minutes: 5.09
What does it mean?
Assuming you're using a single-CPU system, the numbers tell us that:
over the last 1 minute: The computer was overloaded by 5% on average; on average, 0.05 processes were waiting for the CPU. (1.05)
over the last 5 minutes: The CPU was idle 30% of the time. (0.70)
over the last 15 minutes: The computer was overloaded by 409% on average; on average, 4.09 processes were waiting for the CPU. (5.09)
if you have a load average of 2 on a single-CPU system, this means your system was overloaded by 100 percent — the entire period of time, one process was using the CPU while one other process was waiting. On a system with two CPUs, this would be complete usage — two different processes were using two different CPUs the entire time. On a system with four CPUs, this would be half usage — two processes were using two CPUs, while two CPUs were sitting idle.
To understand the load average number, you need to know how many CPUs your system has. A load average of 6.03 would indicate a system with a single CPU was massively overloaded, but it would be fine on a computer with 8 CPUs.
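Putting those two numbers together, a quick way to relate the 1-minute load average to the CPU count (Linux-only, since it relies on /proc/loadavg and nproc):

```shell
# Print the 1-minute load average divided by the number of CPUs;
# a result above 1.00 means processes were waiting, on average.
if [ -r /proc/loadavg ]; then
  cpus=$(nproc)
  one_min=$(cut -d' ' -f1 /proc/loadavg)
  awk -v l="$one_min" -v c="$cpus" \
    'BEGIN { printf "1-min load per CPU: %.2f\n", l / c }'
fi
```

For example, a load of 5.09 on a 4-CPU box works out to roughly 1.27 per CPU, i.e. mildly overloaded rather than massively so.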
A POODLE attack is an exploit that takes advantage of the way some browsers deal with encryption. POODLE (Padding Oracle On Downgraded Legacy Encryption) is the name of the vulnerability that enables the exploit.
POODLE can be used to target browser-based communication that relies on the Secure Sockets Layer (SSL) 3.0 protocol for encryption and authentication. The Transport Layer Security (TLS) protocol has largely replaced SSL for secure communication on the Internet, but many browsers will revert to SSL 3.0 when a TLS connection is unavailable. An attacker who wants to exploit POODLE takes advantage of this by inserting himself into the communication session and forcing the browser to use SSL 3.0.
Apple, Google, Mozilla and Microsoft have all announced plans to stop supporting SSL 3.0 in the near future.
Test using https://www.poodletest.com
If you see a poodle, you have some cleaning up to do.
What Does POODLE Do?
POODLE tries to force the connection between your web browser and the server to downgrade to SSLv3. If that succeeds, the attacker can recover plaintext from the communication. That means they can access cookies, which are often used to store session information.
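One way to check a server yourself is to attempt an SSLv3-only handshake with openssl s_client. Caveat: many current OpenSSL builds are compiled without SSLv3, in which case the -ssl3 option itself errors out; the host in the example is a placeholder:

```shell
# Try an SSLv3-only handshake; if the server refuses it, POODLE's
# downgrade target is gone. Requires an OpenSSL built with SSLv3.
check_sslv3() {
  printf '' | openssl s_client -connect "$1:443" -ssl3 2>&1 \
    | grep -E 'Cipher is|handshake failure|unknown option'
}
# Example (placeholder host): check_sslv3 www.example.com
```

"Cipher is (NONE)" or a handshake failure means the server rejected SSLv3; a real cipher name in the output means SSLv3 is still accepted.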
NotImplementedError: getppid unsupported or native support failed to load
ppid at org/jruby/RubyProcess.java:752
ppid at org/jruby/RubyProcess.java:749
To troubleshoot this issue, first check that libcrypt is installed correctly:
ldconfig -p | grep libcrypt
If the path is correct, then check whether /tmp is mounted properly.
The mount command will give you the details of the mounts. If there is a noexec option on the /tmp mount, this will not work.
noexec – Do not allow direct execution of any binaries on the mounted filesystem
So remove the noexec option from /etc/fstab and remount as given below:
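An example of the remount step, wrapped in a function since it requires root and should be run deliberately:

```shell
# After deleting 'noexec' from the /tmp line in /etc/fstab,
# remount /tmp and confirm the option is gone (root required).
remount_tmp() {
  mount -o remount,exec /tmp
  mount | grep ' /tmp '   # output should no longer show noexec
}
```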
• A content delivery network (CDN) is an interconnected system of computers on the Internet that delivers Web content rapidly to numerous users by duplicating the content on multiple servers and directing each user to the content based on proximity (the server closest to that user).
(Figures: request flow for a non-CDN image vs. an image served via CDN)
How it works :
• CDN works by caching static content such as images, CSS, JS and videos for a website all around the web.
Example :
• If a visitor is in India and the web server is in the United States, each request made by the visitor must travel from India to the United States. Although each request may only take a few hundred milliseconds, a site that makes dozens of requests can see its overall load time grow noticeably. With a CDN in place, the user in India still reaches the server in the United States for the page itself, but gets the static content from the nearest CDN node.
Benefits:
• Decreased server load
• Faster content delivery
• Improved page load times
How to test :
• Enable Firebug in the Firefox browser
• Open fire bug add-on and click on “Net” tab.
• Then under “Net” tab click on “All” and refresh the page.
• Now check the domain serving the CSS, image, and JavaScript files; it should be the CDN domain.
• (You can also check it on separate tabs for CSS, JavaScript, Images )
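If you prefer the command line to Firebug, response headers usually give the CDN away. The header names vary by provider (Via, X-Cache, CF-RAY, and X-Served-By are common ones), and the asset URL in the example is a placeholder:

```shell
# Fetch only the response headers for a static asset and filter
# for typical CDN fingerprints.
check_cdn_headers() {
  curl -sI "$1" | grep -iE '^(server|via|x-cache|cf-ray|x-served-by):'
}
# Example: check_cdn_headers https://example.com/static/app.css
```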
What to test :
• Check the functionality of your respective modules; nothing should break.
• Check that images, CSS, and JS are loading properly
Concerns :
• After each deployment of CSS, image, or JS files to the CDN, the changes may not be reflected immediately; in that case we have to refresh (invalidate) the deployed file on the CDN.
This is a cleanup script which you can use as a starting point. Test it from your end before running it on production servers:
dir_cleanup.sh
start_dte=`date '+%X %x'`
this_server=$(uname -n)
# command line
if [ $# -ne 1 ]
then
echo "Usage: $0 <env_file>"
exit 1
fi
env_file=$1
# Source in the env file
. ${env_file}
echo "Start $0 ${start_dte} on ${this_server}"
#########################################################################
# function to clean up the directories
#########################################################################
function do_cleanup
{
#
# Remove files older than the configured number of days
#
for dir in $bkp_dir1
do
echo "Clean $dir"
find $dir -mtime +$bkp_d1 -type f -print -exec rm -f {} \;
done
for dir in $bkp_dir2
do
echo "Clean $dir"
find $dir -mtime +$bkp_d2 -type f -print -exec rm -f {} \;
done
for dir in $bkp_arch
do
echo "Clean $dir"
find $dir -mtime +$bkp_d4 -type f -print -exec rm -f {} \;
done
for dir in $bkp_log_dir
do
echo "Clean $dir"
find $dir -mtime +$bkp_d3 -type f -print -exec rm -f {} \;
done
}
#########################################################################
# Call function to clean up the directories
#########################################################################
do_cleanup
echo "Stop $0 `date '+%X %x'`"
The env file defines the paths which need to be cleaned and their retention in days:
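An example env file; every path and retention value here is a placeholder, so adjust them to your environment:

```shell
# Example env file for dir_cleanup.sh (placeholder paths and retention).
bkp_dir1="/backup/daily"        # cleaned after bkp_d1 days
bkp_d1=7
bkp_dir2="/backup/weekly"       # cleaned after bkp_d2 days
bkp_d2=30
bkp_log_dir="/var/log/myapp"    # cleaned after bkp_d3 days
bkp_d3=14
bkp_arch="/backup/archive"      # cleaned after bkp_d4 days
bkp_d4=90
```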