Thursday, October 21, 2010

Tech Cheatsheet

Apache: mod_rewrite simulator

Apache: Turn off server signature for production
ServerSignature Off
ServerTokens Prod

AWK: Remove duplicate lines from a file:
bash> awk '!x[$0]++' file.txt
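Unlike sort -u, this keeps the first occurrence of each line and preserves the original order; a quick demo on stdin:

```shell
# 'x' is an associative array keyed by the whole line; a line prints only
# the first time it is seen (when its counter is still zero)
printf 'alpha\nbeta\nalpha\ngamma\nbeta\n' | awk '!x[$0]++'
# alpha
# beta
# gamma
```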

AWS: CLI: Environment variables
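A sketch of the standard environment variables the AWS CLI recognizes (all values below are placeholders):

```shell
# Credentials and region for the AWS CLI; these override the config files
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecret
export AWS_DEFAULT_REGION=us-east-1
# Or pick a named profile from ~/.aws/credentials instead
export AWS_PROFILE=myprofile
```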
AWS: ELB: Converting certificate private key into PEM format acceptable by ELB
bash> openssl rsa -in my-openssl-pk -outform PEM > my-openssl-pk.pem
AWS: S3: URL for S3 buckets
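For reference, the two common URL styles, virtual-hosted and path-style (bucket and key names below are placeholders):

```shell
BUCKET=my-bucket
KEY=path/to/file.txt
echo "https://${BUCKET}.s3.amazonaws.com/${KEY}"   # virtual-hosted-style
echo "https://s3.amazonaws.com/${BUCKET}/${KEY}"   # path-style
```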

AWS: S3: IAM policy for granting full access to a single bucket from a specific IP range
"Statement": [
"Effect": "Allow",
"Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
"Resource": "arn:aws:s3:::*",
"Condition": {
"IpAddress": { "aws:SourceIp": ["", ""] }
"Effect": "Allow",
"Action": "s3:*",
"Resource": [ "arn:aws:s3:::bucket_name", "arn:aws:s3:::bucket_name/*"],
"Condition": {
"IpAddress": { "aws:SourceIp": ["", ""] }

AWS: S3: Bucket policy for granting full access to a single bucket from a specific IP range
"Statement": [
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
"Resource": [ "arn:aws:s3:::*" ],
"Condition": {
"IpAddress": { "aws:SourceIp": ["", ""] }
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [ "arn:aws:s3:::bucket_name", "arn:aws:s3:::bucket_name/*"],
"Condition": {
"IpAddress": { "aws:SourceIp": ["", ""] }

AWS: RDS: MySQL: Import Issue
ERROR 1227 (42000) at line X: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
If you get this error, first try setting "log_bin_trust_function_creators = 1" in the parameter group of the RDS instance you're importing into. If that does not help, then the DEFINER clause in your trigger definition is wrong. Open the SQL file with vim and go to line X. Check what the DEFINER clause looks like, then either correct it to match the user you're importing as, or delete it completely (:%s/DEFINER=`user`@`host`//g). The import should then work.
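If you'd rather not edit the dump interactively, the same DEFINER cleanup can be done with sed (the dump filename matches the mysqldump examples further down):

```shell
# Strip every DEFINER=`user`@`host` clause from the dump in place
sed -i 's/DEFINER=`[^`]*`@`[^`]*`//g' my_db_name.sql
```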

AWS: RDS: MySQL: Grant all privileges to DB user
Generally "GRANT ALL ON *.*" on RDS will fail because the root account does not have SUPER user privileges. MySQL however allows the use of `%` or "_" as wildcards for the database, which will allow GRANT on all of the user-created databases and tables.
mysql> GRANT ALL ON `%`.* TO user@'%' IDENTIFIED BY 'password'; 

AWS: RDS: MySQL: Rotate the mysql.general_log and mysql.slow_log tables
mysql> CALL mysql.rds_rotate_general_log;
mysql> CALL mysql.rds_rotate_slow_log;

AWS: RDS: MySQL: Skip a SQL operation if replication gets stuck
1. Connect with mysql command to slave
2. CALL mysql.rds_skip_repl_error;

Git: Permanently cache credentials
# for build machines it's useful to permanently cache the git service account's credentials
git config --global credential.helper store

Haproxy: Setup logging properly through rsyslogd
# haproxy.cfg
log /dev/log local1 info
# rsyslog.conf
# if haproxy logs are also being written to /var/log/messages you can exclude them
*.info;mail.none;authpriv.none;cron.none;local1.none            /var/log/messages
# rsyslog.d/10-haproxy.conf
$AddUnixListenSocket /var/lib/haproxy/dev/log # if haproxy is being chrooted to /var/lib/haproxy, run: mkdir /var/lib/haproxy/dev
if $programname startswith 'haproxy' then /var/log/haproxy.log

Haproxy: Maintenance mode on demand
# haproxy.cfg
stats socket /var/lib/haproxy/stats mode 600 level admin
bash> mkdir -p /var/lib/haproxy/
# install socat tool
bash> echo "disable server <backend>/<server-name>" | socat stdio /var/lib/haproxy/stats
bash> echo "enable server <backend>/<server-name>" | socat stdio /var/lib/haproxy/stats

LDAP: ldapsearch
bash> ldapsearch -x -h -D "CN=My Name,OU=Mailboxes,DC=company,DC=com" -W -b 'CN=John Doe,OU=Mailboxes,DC=company,dc=com';

bash> ldapsearch -x -h -D "username" -W password -b "OU=Users, OU=Myorg, dc=mydomain, dc=com";

Linux: rsyslog: disable rate limit
# /etc/rsyslog.conf
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0

Linux: Sysctl
Enable VM address randomization and make it harder to exploit buffer overflows
kernel.randomize_va_space = 2 # in /etc/sysctl.conf

Linux: IPTABLES: Connection throttling
Drop incoming connections which make more than 5 connection attempts on port 22 within 60 seconds:

bash> iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
bash> iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP 

MSSQL: Find out what IP address you're connecting from
SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@spid

MySQL: Obtain a copy of the database for setting up a replica (with minimal locking)
bash> mysqldump --databases --master-data --routines --single-transaction my_db_name > my_db_name.sql

MySQL: Make a backup without locking the database (for InnoDB engine only)
bash> mysqldump --single-transaction --routines my_db_name > my_db_name.sql

MySQL: Skip a SQL operation if replication gets stuck
Connect with the mysql command to the slave, then:
mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
mysql> START SLAVE;

OpenSSH: Generate Public key from a Private key
bash> ssh-keygen -y -f id_rsa > id_rsa.pub

OpenSSL: Generate Self-Signed SSL certificate
bash> openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 1234
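Adding -subj makes the command fully non-interactive, and you can sanity-check the result afterwards (the CN below is a placeholder):

```shell
# Generate a self-signed cert without prompts, then print its subject and validity dates
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test.example.com" \
    -keyout key.pem -out cert.pem -days 1234 2>/dev/null
openssl x509 -in cert.pem -noout -subject -dates
```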

OpenSSL: Encrypt a string using AES-128-CBC and base64 encode it
bash> openssl aes-128-cbc -K <encryption_key_hex> -iv <initialization_vector_hex> -in file.orig -a
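Decryption is the same command with -d (with -a now base64-decoding the input); here's a round trip using a throwaway key and IV (both hex values below are placeholders):

```shell
KEY=00112233445566778899aabbccddeeff   # 128-bit key as 32 hex chars
IV=0102030405060708090a0b0c0d0e0f10    # 16-byte IV as 32 hex chars
echo "secret data" > file.orig
openssl aes-128-cbc -K $KEY -iv $IV -in file.orig -a > file.enc   # encrypt + base64
openssl aes-128-cbc -d -K $KEY -iv $IV -in file.enc -a            # prints: secret data
```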

Splunk: UF: Renaming Hosts
Method 1: To change the host name reported in the Splunk Web UI, edit "/opt/splunkforwarder/etc/system/local/inputs.conf" on the forwarding agent, then restart the Splunk agent.
Method 2: After changing the hostname of a machine, or before making an AMI, stop the Splunk forwarder and run:
bash> splunk clone-prep-clear-config

Splunk: UF: Check Status of Universal Forwarder
On the machine running the Splunk Universal Forwarder open a browser and go to:

Or from the command line:
/opt/splunkforwarder/bin/splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus 

Splunk: UF: Reindex all files on a host
1. On any Search Head: delete the respective data from the indexers first, otherwise there will be duplicates after the reindex. Log in as admin and pipe the search results you want gone to the delete command (e.g. sourcetype=blah | delete); make sure to run it over the "All time" time period.
2. On the machine with the Universal Forwarder: delete the fishbucket: rm -rf /opt/splunkforwarder/var/lib/splunk/fishbucket && /etc/init.d/splunk restart

If the files monitored by the Splunk UF have not received any logs in the last few hours, you may need to run echo "test" >> monitored_log_file before step 2 will work.

Splunk: UF: Autoscaling
When running the Splunk UF in an ASG you can't use the IP address or hostname of the instances to control things on the Splunk deployment server; instead you can use the clientName. Here's how to set it up on the UF:

# add below lines to /opt/splunkforwarder/etc/system/local/deploymentclient.conf
clientName = my-host-name

TAR: find output
bash> tar cf search_results.tar $(find . -name 'pattern*');

TAR: copy directory to another server and preserve permissions
bash> tar cf - /my/dir | ssh user@host tar xf - -C /your/dir
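A local sketch of the same pipe (without the ssh hop) showing that file modes survive; all paths are throwaway:

```shell
# Copy ./src into ./dst through a tar pipe, preserving permissions (-p)
mkdir -p src dst
touch src/script.sh && chmod 750 src/script.sh
tar cpf - -C src . | tar xpf - -C dst
stat -c '%a' dst/script.sh   # 750 (GNU stat)
```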

TeamPass: Custom improvements
> ZeroClipboard.setMoviePath("<?php echo $_SESSION['settings']['cpassman_url'];?>/includes/js/zeroclipboard/ZeroClipboard.swf");
< ZeroClipboard.setMoviePath("");
> id="pw_size" value="8" 
> id="edit_pw_size" value="8" 
< id="pw_size" value="16" 
< id="edit_pw_size" value="16"

Ubuntu: Disable upstart job from running at boot time
### echo 'manual' > /etc/init/SERVICE.override
bash> echo 'manual' > /etc/init/rpcbind.override

Ubuntu: Upstart init script template
description "node"
author ""

respawn
respawn limit 20 5

start on runlevel [2345]
stop on runlevel [^2345]

# set limit on number of opened files
limit nofile 65535 65535

script
    exec sudo -u www-data NODE_ENV=prod /usr/bin/nodejs /var/www/server.js >> /var/log/nodejs.log 2>&1
end script

Ubuntu: Systemv init script template

#!/bin/sh
### BEGIN INIT INFO
# Provides: my_service
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Description: Start my_service at boot time
### END INIT INFO

SERVICE=my_service
CMD=/usr/local/bin/my_service  # path to the service binary; adjust as needed

case $1 in
    start)   echo "Starting $SERVICE"; $CMD & ;;
    stop)    echo "Stopping $SERVICE"; kill $(pidof $CMD) ;;
    restart) echo "Restart $SERVICE"; $0 stop; $0 start ;;
    status)  echo "$SERVICE Status"; pidof $CMD ;;
    *)       echo "Invalid Option"; echo "Valid Options: start|stop|restart|status" ;;
esac

To enable my_service to start/stop automatically run the following:
bash> update-rc.d my_service defaults
bash> update-rc.d my_service enable

Ubuntu: Duo Authentication
# create /etc/apt/sources.list.d/duosecurity.list containing the line below
deb trusty main

# run the following 2 commands
curl -s | sudo apt-key add -
apt-get update && apt-get install duo-unix

# edit /etc/pam.d/common-auth and add the following line AFTER line
auth requisite /lib64/security/

### edit /etc/ssh/sshd_config
UsePAM yes
ChallengeResponseAuthentication yes
UseDNS no

# decide what to do about pubkey authentication as if that succeeds SSH skips PAM
# add below to the end of sshd_config to allow ssh keys only from restricted networks
PubkeyAuthentication no
Match Address,
PubkeyAuthentication yes
bash> service ssh restart

Varnish: Clear Cache
  1. On the box where varnish is running, enter varnishadm
    varnishadm -T
  2. Run the purge command with the regexp of the URL
    purge.url /wp-content/themes
For older Varnish versions the VCL needs to be modified to accept PURGE requests; after that you can send PURGE requests from telnet.

VLC: capture one frame (25fps) from a webcam and save it in png
bash> vlc v4l2:// --vout=dummy --aout=dummy --intf=dummy --video-filter=scene  --scene-format=png --scene-ratio=25 --scene-width=384 --scene-height=288 --run-time=1 --scene-prefix=frame --scene-path=/path/vlc-capture/ vlc://quit

Convert png to gd2 for nagios:
bash> pngtogd2 image.png image.gd2 0 1;

0 = chunk size
1 = no compression (raw)

Ettercap arp mitm between gw and target and save traffic to a file:
bash> ettercap -Tq -M arp -i eth0 -w traffic.out / /;

-T = text
-q = quiet
-i = interface
-w = file
/ = gw
/ = target

1. bash> echo "1" > /proc/sys/net/ipv4/ip_forward; # enable IP forwarding
2.bash> iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000; # redirect all tcp traffic from port 80 to localhost port 10000 (to sslstrip)
3. run sslstrip
4.bash> ettercap -T -M arp -i eth0 -o / /; # perform arp poisoning only

bash> find / -name -exec ls -l '{}' ';'
bash> find / -name -exec ls -l {} \;
bash> find / -type f -mtime +28 -exec rm -rf {} \; # delete all files older than 4 weeks (-mtime takes days)
bash> find / -name -exec cat {} >>out \; # output content of found files to file out
bash> tar cf archive.tar $(find . -name 'myfiles*'); # this will tar all myfiles found by `find` into archive.tar

SSH: Socks proxy over ssh
bash> ssh -C -D 1080 user@host

set the browser to use localhost:1080 as the SOCKS server and all web traffic will go through the ssh tunnel

bash> pppd silent pty 'ssh user@host -t "pppd ipcp-accept-local ipcp-accept-remote"';

this will create a ppp tunnel with IPs between localhost and host you ssh into

bash> VAL=`sed -n 's/.*PATTERN \([A-Z][A-Z][A-Z]\).*/\1/p' test.txt`; #this will search for 'PATTERN ABC' and return 'ABC'

bash> sed 's/text\([A-Z].*\)text/\1/' test.txt; # group the needed match with \( \) then reuse it with \1, or reuse the entire match with &

example: if test.txt contains textHELLOtext, running the above command outputs HELLO, or textHELLOtext if you use & instead of \1
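The same sed examples as one-liners on stdin, so they can be tried without a test.txt:

```shell
# extract the 3-uppercase-letter token after PATTERN
echo "some PATTERN ABC here" | sed -n 's/.*PATTERN \([A-Z][A-Z][A-Z]\).*/\1/p'   # ABC
# keep only the grouped part
echo "textHELLOtext" | sed 's/text\([A-Z].*\)text/\1/'                           # HELLO
# reuse the entire match with &
echo "textHELLOtext" | sed 's/text[A-Z].*text/[&]/'                              # [textHELLOtext]
```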

Increment dates by 1 year
bash> exiftool "-alldates+=1:00:00 00:00:00" picture.jpg
