
Monday, October 15, 2018

Scripting Tmux

Tmux has the superpower of running your terminal in split windows (tmux panes).

It provides sub-commands to enable scripting. For instance, say I would like to launch 4 commands in 4 different panes.


################
# cmd1 | cmd2
#------+--------
# cmd3 | cmd4
################

tmux new-window 'cmd1'
tmux split-window -h 'cmd2'
tmux select-pane -L
tmux split-window -v 'cmd3'
tmux select-pane -R
tmux split-window -v 'cmd4'

This can also be useful when orchestrating SSH sessions across different hosts, or creating multiple SSH tunnels at a time (don't ask me why I'm doing this).
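
For example, here is a minimal sketch of the SSH-tunnel case in the same 2x2 layout (the hostnames and port numbers below are hypothetical placeholders):

tmux new-window 'ssh -N -L 8080:localhost:80 host1'      # tunnel 1
tmux split-window -h 'ssh -N -L 5433:localhost:5432 host2'  # tunnel 2
tmux select-pane -L
tmux split-window -v 'ssh -N -L 6380:localhost:6379 host3'  # tunnel 3
tmux select-pane -R
tmux split-window -v 'ssh host4'                          # plain SSH session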

Happy hacking Tmux!

Saturday, September 22, 2018

vlc chromecast shortcut with ranger file manager


I own a Sony TV which has a nice built-in Chromecast feature. So far I've been using VLC's Chromecast renderer for playing the videos.

Today I took a look at ranger's custom commands and wrote a shortcut for it, just to save myself a few clicks every time I watch anime.


class tv(Command):
    def execute(self):
        # the renderer IP is deliberately elided -- fill in your Chromecast's address
        tv = "vlc --sout='#chromecast{ip=192.168.xx.xx}' --demux-filter=demux_chromecast"
        filepath = self.fm.thisfile.path
        command = '{} "{}"'.format(tv, filepath)
        self.fm.execute_command(command)

Wednesday, February 14, 2018

Moving Google Drive data via Google Compute Engine

Recently I've been using Google Compute Engine to move some personal data (several TB) from one Google Drive account to another. This post describes my experience moving more than 10TB of data.



The rough monthly cost I paid to Google: $75/mo.

Compute Engine Standard Intel N1 1 VCPU running in Americas for 744 hours: $35/mo
Compute Engine Storage PD Capacity: 1024GB Gibibyte-months: $40/mo

Yes, I attached a 1TB persistent disk for temporary storage so I could move stuff in bulk.

The cool thing is that Google doesn't charge for egress traffic to/from Google Drive, which saves a lot of money. Say you have 10TB of data to move: according to Google's Internet egress rates, that would be $0.12 * 10 * 1024 = $1228.8 for the traffic alone.



I used skicka, a tool written in Go that I've been using for a long time to download/upload files to my Google Drive.

By default, skicka saves its metadata at ~/.skicka.metadata.cache  and ~/.skicka.tokencache.json, but it's also easy to use a different set of metadata when you have multiple accounts:

skicka -tokencache ~/skic/account1.tokencache.json -metadata-cache-file ~/skic/account1.metadata.cache download /<dir name> <local dir name>
skicka upload <local dir name> <dir name>

I used the commands above to move the files in pre-organized directories, each holding around 500~800GB of stuff.
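
A minimal sketch of the kind of loop I ran, assuming the directories are pre-organized on Drive and the account1 token/metadata cache files already exist (the directory names below are hypothetical):

for dir in batch-01 batch-02 batch-03; do
    # pull from the source account using its own cache files
    skicka -tokencache ~/skic/account1.tokencache.json \
           -metadata-cache-file ~/skic/account1.metadata.cache \
           download "/$dir" "$dir"
    # push to the destination account (default caches)
    skicka upload "$dir" "/$dir"
    # free the 1TB scratch disk before the next batch
    rm -rf "$dir"
done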



The reason I was glad to pay more for the N1 instance is the egress throughput cap. Each vCPU gets a 2 Gbps egress cap, so the more vCPUs you have, the faster you can go (up to 16 Gbps). Shared-vCPU instances (such as f1-micro or g1-small) count as 0.5 vCPU, so their egress cap is 1 Gbps. This really matters when you have over 10TB of data to move around.

Details: https://cloud.google.com/compute/docs/networks-and-firewalls#egress_throughput_caps


My ISP plan is 250Mbps down / 25Mbps up. Using Google Compute Engine as the temporary storage saved lots of time and also avoided the freaking data usage cap enforced by my ISP.



It actually took me several months to move all my data from one Google Drive account to the other (because I was so busy). I really wish I had invested one weekend in writing some automation scripts to shuttle stuff from/to Google Drive, but unfortunately I didn't. The total cost ended up being $75 * N months, part of which was covered by my $300 free credits, so it turned out to be not a bad deal.

Friday, November 13, 2015

gdb pretty-print python path


On some of my dev machines (freaking Ubuntu 14.04), gdb can't find the pretty-printer Python extension.

(^q^) r
Starting program: /home/xatier/xxxxx
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Traceback (most recent call last):
  File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py", line 63, in <module>
    from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'


Let's print the Python path:


On Ubuntu 14.04
(^q^) python print(sys.path)
['/usr/share/gdb/python', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages']

On arch linux
(^q^) python print(sys.path)
['/usr/lib/../share/gcc-5.2.0/python', '/usr/share/gdb/python', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-linux', '/usr/lib/python3.5/lib-dynload', '/usr/lib/python3.5/site-packages']


See? The Python scripts live under /usr/share/<gcc-ver>/python

$ ls /usr/share | grep gcc
gcc-4.8


Just add this line to your ~/.gdbinit :

python sys.path.append("/usr/share/gcc-4.8/python")
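
If you don't want to hard-code the gcc version, a quick sketch of a one-liner that appends the first matching path (picks the first directory if several gcc versions are installed):

echo "python sys.path.append(\"$(ls -d /usr/share/gcc-*/python | head -n1)\")" >> ~/.gdbinit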


Now you are good.

(^q^) python print(sys.path)
['/usr/share/gdb/python', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages', '/usr/share/gcc-4.8/python']


Friday, July 3, 2015

ffmpeg x264 lossless video encoding

I re-encoded my previous club class video clips to save my Google Drive quota.

The result is extremely good.


$ ls -lh
-rw-r----- 1 xatier staff 2.8G Jun 29 07:13 Topic 20 - [Programming 1] Python 1.mov
-rw-r--r-- 1 xatier staff 1.6G Jul 3 00:24 Topic 20 - [Programming 1] Python 1.mp4
-rw-r----- 1 xatier staff 3.1G Jun 29 07:30 Topic 21 - [Programming 2] Python 2.mov
-rw-r--r-- 1 xatier staff 1.4G Jul 3 13:43 Topic 21 - [Programming 2] Python 2.mp4


For the x264 lossless encoding, you can use the following commands:

If you don't have time, use the 'ultrafast' preset:

# fastest encoding
$ ffmpeg -i input -c:v libx264 -preset ultrafast -qp 0 -c:a copy output

If you need a compressed encoding, use the 'veryslow' preset:

# best compression
$ ffmpeg -i input -c:v libx264 -preset veryslow -qp 0 -c:a copy output


Both presets produce the same (lossless) output quality (-qp 0); they differ only in encoding speed and file size.


It took over 2 hours for this video on my mid-2012 MacBook Air (1.8 GHz Intel Core i5 / 8 GB 1600 MHz DDR3).


$ ffmpeg -i Topic\ 20\ -\ \[Programming\ 1\]\ Python\ 1.mov  -c:v libx264 -preset veryslow -qp 0 -c:a copy py1.mp4
...
frame=459704 fps= 23 q=-1.0 Lsize= 1660358kB time=02:07:41.73 bitrate=1775.3kbits/s dup=1008 drop=0
video:1387489kB audio:264157kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.527454%


Update: il gnaggnoy reminded me that not only qp but also qpmax and qpmin should be set to 0 in order to force a lossless encoding in x264.
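
If you want to pin those as well, an untested sketch would be to pass them straight through to x264 via -x264-params:

# force qp, qpmin and qpmax all to 0
$ ffmpeg -i input -c:v libx264 -preset veryslow -qp 0 -x264-params "qpmin=0:qpmax=0" -c:a copy output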

Reference:
https://trac.ffmpeg.org/wiki/Encode/H.264
https://wiki.archlinux.org/index.php/FFmpeg



Monday, June 29, 2015

List Google Drive files by size

When you run out of quota on your Google Drive, you'll want to see which files are occupying the space.

For Google Drive, you can just use this link to see the list of uploaded files by size.

https://drive.google.com/#quota

Alternatively, you can hover over your storage usage in the bottom left corner and click the Google Drive icon; that will lead you to the same link as above.

Friday, June 26, 2015

Currency conversions in Google Spreadsheet

Just learned a tip in Google Spreadsheet.

To get the currency conversion rate from TWD to USD:

=GoogleFinance("CURRENCY:TWDUSD")




That will give you the rate from the Google Finance database; historical data is also available.

=GoogleFinance("currency:USDTWD", "price", today()-10, today())






Other API parameters:
https://support.google.com/docs/answer/3093281


Reference:
http://googledocstips.com/2011/03/09/how-to-calculate-foreign-exchange/
http://stackoverflow.com/questions/20607627/on-google-spreadsheet-how-can-you-query-googlefinance-for-a-past-exchange-rate


Sunday, May 31, 2015

Rip Audio CDs in command line

We basically use two tools: cdparanoia and lame.

cdparanoia: Compact Disc Digital Audio extraction tool https://www.archlinux.org/packages/extra/x86_64/cdparanoia/
lame: A high quality MPEG Audio Layer III (MP3) encoder https://www.archlinux.org/packages/extra/x86_64/lame/


# grab CD information
$ cdparanoia -vsQ
cdparanoia III release 10.2 (September 11, 2008)

Using cdda library version: 10.2
Using paranoia library version: 10.2
Checking /dev/cdrom for cdrom...
Testing /dev/cdrom for SCSI/MMC interface
SG_IO device: /dev/sr0

CDROM model sensed sensed: ASUS DRW-24D1ST 1.00

Checking for SCSI emulation...
Drive is ATAPI (using SG_IO host adaptor emulation)

Checking for MMC style command set...
Drive is MMC style
DMA scatter/gather table entries: 1
table entry size: 131072 bytes
maximum theoretical transfer: 55 sectors
Setting default read size to 27 sectors (63504 bytes).

Verifying CDDA command set...
Expected command set reads OK.

Attempting to set cdrom to full speed...
drive returned OK.

Table of contents (audio tracks only):
track length begin copy pre ch
===========================================================
1. 11033 [02:27.08] 0 [00:00.00] no no 2
2. 18007 [04:00.07] 11033 [02:27.08] no no 2
3. 10100 [02:14.50] 29040 [06:27.15] no no 2
4. 13237 [02:56.37] 39140 [08:41.65] no no 2
5. 9050 [02:00.50] 52377 [11:38.27] no no 2
6. 11076 [02:27.51] 61427 [13:39.02] no no 2
7. 23342 [05:11.17] 72503 [16:06.53] no no 2
8. 13141 [02:55.16] 95845 [21:17.70] no no 2
9. 16703 [03:42.53] 108986 [24:13.11] no no 2
10. 12635 [02:48.35] 125689 [27:55.64] no no 2
TOTAL 138324 [30:44.24] (audio only)





# rip the tracks
$ cdparanoia -B
cdparanoia III release 10.2 (September 11, 2008)


Ripping from sector 0 (track 1 [0:00.00])
to sector 138323 (track 10 [2:48.34])

outputting to track01.cdda.wav

(== PROGRESS == [ | 011032 00 ] == :^D * ==)

outputting to track02.cdda.wav

(== PROGRESS == [ | 029039 00 ] == :^D * ==)

outputting to track03.cdda.wav

(== PROGRESS == [ | 039139 00 ] == :^D * ==)

outputting to track04.cdda.wav

(== PROGRESS == [ | 052376 00 ] == :^D * ==)

outputting to track05.cdda.wav

(== PROGRESS == [ | 061426 00 ] == :^D * ==)


...

outputting to track10.cdda.wav

(== PROGRESS == [ | 138323 00 ] == :^D * ==)

Done.





# convert to mp3 format
$ for i in *.wav; do lame "$i"; done
LAME 3.99.5 64bits (http://lame.sf.net)
Using polyphase lowpass filter, transition band: 16538 Hz - 17071 Hz
Encoding track01.cdda.wav to track01.cdda.mp3
Encoding as 44.1 kHz j-stereo MPEG-1 Layer III (11x) 128 kbps qval=3
Frame | CPU time/estim | REAL time/estim | play/CPU | ETA
5633/5633 (100%)| 0:04/ 0:04| 0:04/ 0:04| 35.102x| 0:00
-------------------------------------------------------------------------------------
kbps LR MS % long switch short %
128.0 1.8 98.2 99.9 0.0 0.0
Writing LAME Tag...done
ReplayGain: -0.3dB
LAME 3.99.5 64bits (http://lame.sf.net)
Using polyphase lowpass filter, transition band: 16538 Hz - 17071 Hz
Encoding track02.cdda.wav to track02.cdda.mp3
Encoding as 44.1 kHz j-stereo MPEG-1 Layer III (11x) 128 kbps qval=3
Frame | CPU time/estim | REAL time/estim | play/CPU | ETA
9193/9193 (100%)| 0:07/ 0:07| 0:07/ 0:07| 34.246x| 0:00
-------------------------------------------------------------------------------------
kbps LR MS % long switch short %
128.0 0.3 99.7 100.0 0.0 0.0
Writing LAME Tag...done
ReplayGain: +0.1dB

...


done!
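
Note that lame defaults to 128 kbps CBR, as you can see in the log above. If you want better quality, something like the following (a sketch, not what I actually ran) uses high-quality VBR and drops the .cdda suffix from the output names:

$ for i in *.cdda.wav; do lame -V0 "$i" "${i%.cdda.wav}.mp3"; done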

Reference: http://www.cyberciti.biz/faq/linux-ripping-and-encoding-audio-files/

Thursday, May 14, 2015

skicka: Google drive command line tool

Install go and skicka

$ sudo pacman -S go
$ mkdir ~/go
$ export GOPATH=~/go
$ export PATH=$PATH:~/go/bin
$ go get github.com/google/skicka

Initialize the configuration and get a client ID/secret key pair from https://console.developers.google.com/project
Read: https://github.com/google/skicka/blob/master/README.md


$ skicka init
2015/05/14 21:44:37 created configuration file /home/xatier/.skicka.config.
$ vim ~/.skicka.config



OAuth authentication on first use generates ~/.skicka.metadata.cache and ~/.skicka.tokencache.json:

$ skicka ls -l /
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?***********************************
Enter verification code: ******************************************
Updating metadata cache: 
[========================================================================================] 99.99 % 37s


Supported commands:

$ skicka
usage: skicka [skicka options] [command options]

Supported commands are:
  cat       Print the contents of the given file
  download  Download a file or folder hierarchy from Drive to the local disk
  df        Display free space on Drive
  du        Report disk usage for a folder hierarchy on Drive
  fsck      Check consistency of files in Drive and local metadata cache
  genkey    Generate a new encryption key
  init      Create an initial skicka configuration file
  ls        List the contents of a folder on Google Drive
  mkdir     Create a new folder or folder hierarchy on Drive
  rm        Remove a file or folder on Google Drive
  upload    Upload a local file or directory hierarchy to Drive



Testing with my HiNet 100M/40M home plan (roughly 1.5MB/s upload in practice):


$ skicka upload GG.mp4 /
Files: 23.83 MB / 23.83 MB 
[========================================================================================] 100.00 % 15s
2015/05/14 22:13:11 Preparation time 1s, sync time 15s
2015/05/14 22:13:11 Updated 1 Drive files, 0 local files
2015/05/14 22:13:11 23.83 MiB read from disk, 0 B written to disk
2015/05/14 22:13:11 23.83 MiB uploaded (1.58 MiB/s), 0 B downloaded (0 B/s)
2015/05/14 22:13:11 4.72 MiB peak memory used

$ skicka upload GG.mp4 /
Files: 23.83 MB / 23.83 MB 
[========================================================================================] 100.00 % 13s
2015/05/14 22:14:19 Preparation time 1s, sync time 13s
2015/05/14 22:14:19 Updated 1 Drive files, 0 local files
2015/05/14 22:14:19 23.83 MiB read from disk, 0 B written to disk
2015/05/14 22:14:19 23.83 MiB uploaded (1.72 MiB/s), 0 B downloaded (0 B/s)
2015/05/14 22:14:19 5.49 MiB peak memory used

$ skicka upload GG.mp4 /
Files: 23.83 MB / 23.83 MB 
[========================================================================================] 100.00 % 13s
2015/05/14 22:14:41 Preparation time 1s, sync time 13s
2015/05/14 22:14:41 Updated 1 Drive files, 0 local files
2015/05/14 22:14:41 23.83 MiB read from disk, 0 B written to disk
2015/05/14 22:14:41 23.83 MiB uploaded (1.71 MiB/s), 0 B downloaded (0 B/s)
2015/05/14 22:14:41 5.49 MiB peak memory used

$ skicka upload GG.mp4 /
Files: 23.83 MB / 23.83 MB 
[========================================================================================] 100.00 % 15s
2015/05/14 22:15:01 Preparation time 1s, sync time 15s
2015/05/14 22:15:01 Updated 1 Drive files, 0 local files
2015/05/14 22:15:01 23.83 MiB read from disk, 0 B written to disk
2015/05/14 22:15:01 23.83 MiB uploaded (1.51 MiB/s), 0 B downloaded (0 B/s)
2015/05/14 22:15:01 5.49 MiB peak memory used

Large file test:

$ skicka upload movie.mkv
Files: 8.39 GB / 8.39 GB 
[========================================================================================] 100.00 % 2h43m59s
2015/05/15 01:18:24 Preparation time 1s, sync time 2h 43m 59s
2015/05/15 01:18:24 Updated 1 Drive files, 0 local files
2015/05/15 01:18:24 8.39 GiB read from disk, 0 B written to disk
2015/05/15 01:18:24 8.39 GiB uploaded (893.93 kiB/s), 0 B downloaded (0 B/s)
2015/05/15 01:18:24 16.07 MiB peak memory used
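
For the reverse direction, download takes the Drive path first and then the local target; a quick sketch (not a timing test):

$ skicka download /GG.mp4 GG.mp4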



Encryption

$ SKICKA_PASSPHRASE=gg skicka genkey
; Add the following lines to the [encryption] section
; of your ~/.skicka.config file.
salt=************************************
passphrase-hash=************************************
encrypted-key=************************************
encrypted-key-iv=************************************

$ SKICKA_PASSPHRASE=gg skicka upload -encrypt GG2.mp4 /
Files: 23.83 MB / 23.83 MB 
[========================================================================================] 100.00 % 25s
2015/05/14 21:59:16 Preparation time 1s, sync time 26s
2015/05/14 21:59:16 Updated 1 Drive files, 0 local files
2015/05/14 21:59:16 23.83 MiB read from disk, 0 B written to disk
2015/05/14 21:59:16 23.83 MiB uploaded (928.17 kiB/s), 0 B downloaded (0 B/s)
2015/05/14 21:59:16 4.24 MiB peak memory used


Tuesday, May 5, 2015

STL pretty print support in gdb

I've been programming in C/C++ for a long time, and it's really painful to debug C++ programs with STL containers in gdb; but hey, who doesn't use STL?

gdb always prints lots of useless stuff from a container.

$ g++ -std=c++11 foo.cc -g
$ gdb -q ./a.out
Reading symbols from ./a.out...done.

(^q^) l
1 #include <vector>
2
3 int main (void) {
4    std::vector<int> v = {1, 2, 3, 4, 5};
5    return 0;
6 }

(^q^) b 5
Breakpoint 1 at 0x40082c: file foo.cc, line 5.

(^q^) r
Starting program: /tmp/a.out 

Breakpoint 1, main () at foo.cc:5
5    return 0;

(^q^) p v
$2 = {
  <std::_Vector_base<int, std::allocator<int> >> = {
    _M_impl = {
      <std::allocator<int>> = {
        <__gnu_cxx::new_allocator<int>> = {<No data fields>}, <No data fields>}, 
      members of std::_Vector_base<int, std::allocator<int> >::_Vector_impl: 
      _M_start = 0x602010, 
      _M_finish = 0x602024, 
      _M_end_of_storage = 0x602024
    }
  }, <No data fields>}

(^q^) 



We can use a very tricky way to print it out:


(^q^) p *(v._M_impl._M_start)@(v._M_impl._M_finish - v._M_impl._M_start)
$22 =   {[0] = 1,
  [1] = 2,
  [2] = 3,
  [3] = 4,
  [4] = 5}

(^q^) 

A container is a "container": we don't care about the implementation details, what we care about is the data.

With the pretty printer, we're able to print a container out like this:

(^q^) p v
$1 = std::vector of length 5, capacity 5 = {
  [0] = 1,
  [1] = 2,
  [2] = 3,
  [3] = 4,
  [4] = 5
}


According to the gdb wiki, the gdb python pretty printer is supported since 7.0. https://sourceware.org/gdb/wiki/STLSupport

Download the latest python script from gcc.gnu.org
---
mkdir ~/.gdb
cd ~/.gdb
svn co svn://gcc.gnu.org/svn/gcc/trunk/libstdc++-v3/python
---

Append the following to your ~/.gdbinit
---
python
import sys
sys.path.insert(0, '/home/xatier/.gdb/python')
import libstdcxx.v6
end
---

Note that we only need to import the lib and let __init__.py do the magic ;-)

Done!


If you really want to dig into the implementation details, use " p /r ":


(^q^) p /r v
$1 = {
  <std::_Vector_base<int, std::allocator<int> >> = {
    _M_impl = {
      <std::allocator<int>> = {
        <__gnu_cxx::new_allocator<int>> = {<No data fields>}, <No data fields>}, 
      members of std::_Vector_base<int, std::allocator<int> >::_Vector_impl: 
      _M_start = 0x602010, 
      _M_finish = 0x602024, 
      _M_end_of_storage = 0x602024
    }
  }, <No data fields>}

(^q^)


Actually, on Arch Linux this Python script is installed and automatically loaded once you install gcc/gcc-multilib.


$ pacman -Ql gcc-multilib | grep libstd
gcc-multilib /usr/lib/libstdc++.a
gcc-multilib /usr/lib32/libstdc++.a
gcc-multilib /usr/share/gcc-4.9.2/python/libstdcxx/
gcc-multilib /usr/share/gcc-4.9.2/python/libstdcxx/__init__.py
gcc-multilib /usr/share/gcc-4.9.2/python/libstdcxx/v6/
gcc-multilib /usr/share/gcc-4.9.2/python/libstdcxx/v6/__init__.py
gcc-multilib /usr/share/gcc-4.9.2/python/libstdcxx/v6/printers.py
gcc-multilib /usr/share/gdb/auto-load/usr/lib/libstdc++.so.6.0.20-gdb.py


If you're using another distro like Debian, it should be here, but it's not auto-loaded by default :(

$ apt-file list gcc | grep libstd
gcc-snapshot: /usr/lib/gcc-snapshot/share/gcc-4.9.0/python/libstdcxx/__init__.py
gcc-snapshot: /usr/lib/gcc-snapshot/share/gcc-4.9.0/python/libstdcxx/v6/__init__.py
gcc-snapshot: /usr/lib/gcc-snapshot/share/gcc-4.9.0/python/libstdcxx/v6/printers.py


Happy debugging C++! (^q^)

Monday, May 4, 2015

Reversing reverse-engineered tools: reversing the pyc of the Easy-Card tool

A hacker called Zhi-Wei Cai did some reverse engineering on Taipei's EasyCard (悠遊卡) balance querying system.


https://github.com/x43x61x69/Easy-Card


But he only released the .pyc file.

I took a glance at the binary file and smiled: he didn't apply any code obfuscation, which means it's pretty easy for anyone who wants to read the source code.

With the help of this tool: https://github.com/wibiti/uncompyle2

uncompyle2 easycard.pyc > easycard.py

You can decompile that yourself.




Basically the API takes 4 parameters: verify, cardID, begin, end.


verify = md5((seed * const) + salt)
where
    seed = date.month + date.day + date.hour
    salt = 'L0CalKing'
    const = 8544

cardID = base64( des3( data, key, iv, mode=DES3.MODE_CBC ) )
where
    data = 'your card ID', like '1234567889'
    key = 'EasyCardToKingay23456789'
    iv = '01234567'

begin / end = time period


I'm really curious how he got these constants, but I don't want to dig into the original app. :)


Lesson: don't release a .pyc file without code obfuscation if you really don't want people digging into your code.


Code listing (partially omitted):

#!/usr/bin/env python2
# -*- encoding: utf8 -*-

# 2015.05.04 17:41:47 CST
import sys
import datetime
import hashlib
import urllib
import urllib2
import json
from Crypto.Cipher import DES3
import pytz
version = '0.3'
copyright = 'Copyright (C) 2015 Zhi-Wei Cai.'
key = 'EasyCardToKingay23456789'
iv = '01234567'
salt = 'L0CalKing'
const = 8544

def getID(data, isEncrypt, key, iv, encode):
    size = len(data)
    # '\x06' is the padding of DES3
    if size % 16 != 0:
        data += '\x06' * (16 - size % 16)
    des3 = DES3.new(key, DES3.MODE_CBC, iv)
    if isEncrypt:
        result = des3.encrypt(data).encode(encode).rstrip()
    else:
        result = des3.decrypt(data.decode(encode))
    return result



def getVerify(const, seed, salt):
    hash = hashlib.md5()
    hash.update(str(seed * const) + salt)
    return hash.hexdigest().upper()



def proc(data):
    e = getID(data, 1, key, iv, 'base64')
    cardID = urllib.quote_plus(e)
    date = datetime.datetime.now(pytz.timezone('Asia/Taipei'))
    seed = date.month + date.day + date.hour
    begin = '{:%Y-%m-%d}'.format(date - datetime.timedelta(days=30))
    end = '{:%Y-%m-%d}'.format(date)
    verify = getVerify(const, seed, salt)
    url = '<Easy Card API URL>'.format(verify, cardID, begin, end)
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    content = response.read()
    dict = json.loads(content)

   # the rest part of the code is omitted


if __name__ == '__main__':
    print '\n悠遊卡餘額明細查詢 v{}'.format(version)
    print '{}\n'.format(copyright)
    if len(sys.argv) > 1:
        try:
            print '\n{:=^90}\n'.format('[ 查詢開始 ]')
            proc(str(sys.argv[1]))
        except ValueError as err:
            pass
    else:
        while 1:
            try:
                data = raw_input('請輸入卡片號碼:').replace(' ', '')
                if len(data):
                    print '\n{:=^90}\n'.format('[ 查詢開始 ]')
                    proc(data)
                else:
                    break
            except ValueError as err:
                pass


#+++ okay decompyling easycard.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify failed
# 2015.05.04 17:41:47 CST

Wednesday, April 8, 2015

More notes for shadowsocks

I wrote a note on the usage of shadowsocks a few days ago: http://xatierlike.blogspot.tw/2015/03/note-for-shadowsocks.html

I spent some time digging into the project's source code and came up with this note of what I've found. :D


License

shadowsocks is under Apache 2.0.

autoban

There's a script called autoban.py under shadowsocks/utils .

According to the official wiki, that is used for banning brute force crackers.
https://github.com/shadowsocks/shadowsocks/wiki/Ban-Brute-Force-Crackers

Actually, autoban.py is implemented with iptables: the script scans the log for lines like the following and grabs the remote IP out.

'2015-04-07 16:42:26 ERROR    can not parse header when handling connection from 61.157.96.193:27242'

if 'can not parse header when' in line:
    ip = line.split()[-1].split(':')[0]
    ...
    cmd = 'iptables -A INPUT -s %s -j DROP' % ip
    print(cmd, file=sys.stderr)
    os.system(cmd)


Versions

shadowsocks uses a very strange way to determine whether it's running under Python 2 or Python 3.

if bytes == str

This is True in python2 but False in python3.

Also, the version check below uses a strange double negation.

I would write if info[0] == 2 and info[1] < 6 rather than what the author does.

def check_python():
    info = sys.version_info
    if info[0] == 2 and not info[1] >= 6:
        print('Python 2.6+ required')
        sys.exit(1)
    elif info[0] == 3 and not info[1] >= 3:
        print('Python 3.3+ required')
        sys.exit(1)
    elif info[0] not in [2, 3]:
        print('Python version not supported')
        sys.exit(1)
     
     
Argument parsing

The entry points of sslocal and ssserver commands are the main functions in local.py and server.py, respectively.

shell.py parses the command line arguments and checks the configuration files. It's using getopt; I think that should be rewritten with argparse.


Event loop and Relay:

Basically shadowsocks abstracts three kinds of polling systems: epoll, kqueue and plain select, and it will use whichever is available, in that order of preference.

class EventLoop(object):
    def __init__(self):
        self._iterating = False
        if hasattr(select, 'epoll'):
            self._impl = EpollLoop()
            model = 'epoll'
        elif hasattr(select, 'kqueue'):
            self._impl = KqueueLoop()
            model = 'kqueue'
        elif hasattr(select, 'select'):
            self._impl = SelectLoop()
            model = 'select'
        else:
            raise Exception('can not find any available functions in select '
                            'package')


So basically shadowsocks runs a local server that speaks the SOCKS5 proxy protocol and sends data to the remote server via TCP/UDP relays.

Both the TCP and UDP relays encrypt the payload with the specified algorithm.

Here's a diagram of the idea behind shadowsocks.

browser <== SOCKS proxy ==> local <== TCP/UDP relays ==> remote => free world

browser <== plain text ==> local <= encrypted data => GFW <= encrypted data => remote => free world


Pretty similar to an SSH tunnel, right?

browser <= socks proxy => ssh client <= tunnel => ssh server => free world

The characteristics of SSH handshake traffic are easily blocked by the GFW, whereas shadowsocks traffic is just plain standard TCP/UDP with unknown/encrypted payloads.


The following is from the comments in tcprelay.py and udprelay.py.

TCP Relay

# for each opening port, we have a TCP Relay
# for each connection, we have a TCP Relay Handler to handle the connection
# for each handler, we have 2 sockets:
#    local:   connected to the client
#    remote:  connected to remote server

# as sslocal:
# stage 0 SOCKS hello received from local, send hello to local
# stage 1 addr received from local, query DNS for remote
# stage 2 UDP assoc
# stage 3 DNS resolved, connect to remote
# stage 4 still connecting, more data from local received
# stage 5 remote connected, piping local and remote

# as ssserver:
# stage 0 just jump to stage 1
# stage 1 addr received from local, query DNS for remote
# stage 3 DNS resolved, connect to remote
# stage 4 still connecting, more data from local received
# stage 5 remote connected, piping local and remote

UDP Relay

# HOW TO NAME THINGS
# ------------------
# `dest`    means destination server, which is from DST fields in the SOCKS5
#           request
# `local`   means local server of shadowsocks
# `remote`  means remote server of shadowsocks
# `client`  means UDP clients that connects to other servers
# `server`  means the UDP server that handles user requests


Misc

shadowsocks implements its own DNS resolver (asyncdns.py) and LRU cache (lru_cache.py).


Reference:
http://vc2tea.com/whats-shadowsocks/
http://gpio.me/readcode-ShadowSocks.html

steal LINE stickers to telegram

1. pull all LINE stickers from your phone

adb pull /storage/sdcard0/Android/data/jp.naver.line.android/stickers/ .

2. find your purchased sticker pack (you can look at the "preview" file)

for example, the sticker pack of Puella Magi Madoka Magica is # 1101

Note that you can also find stickers sent by your friends.

3. convert the files (PNG) to the WebP format

I'm lazy, so I wrote a script to do the following steps:

mkdir madoka
cp ../stickers/1101/* madoka/
cd madoka/
rm *_key preview thumbnail *.tmp
for i in *; do cwebp "$i" -o "$i.webp"; done
for i in $(ls | grep -v webp); do mv "$i" "$i.png"; done
cd ..

4. put them on your phone, done!



Test:

The first one is the file in PNG, the second one is in WebP.



Automatic script: https://gist.github.com/xatier/971e1abe16f3bcbc51d9

Reference
https://telegram.org/blog/stickers
https://developers.google.com/speed/webp/docs/using

Saturday, April 4, 2015

network speed test between two linux boxes


This trick with nc and dd can be used for speed testing between two Linux boxes.

Server:
nc -vvlnp 12345 > /dev/null

Client:
dd if=/dev/zero bs=1M count=1k | nc -vvn <server IP> 12345
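
dd prints the transfer rate on the client when it finishes. If you want to watch the throughput live on the server side instead, you can pipe through pv (assuming pv is installed):

nc -vvlnp 12345 | pv > /dev/null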

Tuesday, March 31, 2015

GnuPG notes

Some notes for GnuPG

# usually we can replace <key ID> with <user ID>
# keygen
gpg --full-gen-key (choose DSA & Elgamal here)


# editing
gpg --edit-key <key ID>


# key listing
gpg -k [ <user ID> or <key ID> ]
gpg -K [ <user ID> or <key ID> ]

-k: --list-public-keys / --list-keys
-K: --list-secret-keys


# fingerprint
gpg --fingerprint [ <key ID> ]


# import and export (backup & restore)
gpg --import filename

gpg --export <key id>
gpg --export-secret-keys <key id>

# --armor (-a): ASCII text format
gpg -a --export <key id>
gpg -a --export-secret-subkeys <key id>

gpg --enarmor filename.gpg
gpg --dearmor filename.asc


# keyserver
gpg --keyserver pgp.mit.edu --send-keys <key ID>
gpg --keyserver pgp.mit.edu --recv-keys <key ID>
gpg --keyserver pgp.mit.edu --search-keys <key ID>


# encryption and decryption
gpg -e filename
gpg -r <key ID> -e filename
gpg -o filename -d filename.gpg

-e: --encrypt
-d: --decrypt
-o: --output
-r: --recipient


# signature
(in place signature)
gpg --sign filename
gpg --clearsign filename
gpg --verify filename.gpg

(detached signature)
gpg --detach-sign filename
gpg -a --detach-sign filename
gpg --verify filename.sig filename


# sign-key
gpg --sign-key <user id>
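
A quick end-to-end example (the recipient address is hypothetical): encrypt and sign a file for someone, ASCII-armored, then decrypt and verify it on the other end:

gpg -a -r alice@example.com -e -s notes.txt    # writes notes.txt.asc
gpg -o notes.txt -d notes.txt.asc              # decrypts and verifies the signature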


Tuesday, March 24, 2015

OpenVPN with VPN Gate

VPN Gate is a project by University of Tsukuba, Japan.
http://www.vpngate.net/en/

Basically there are thousands of relay servers hosted by volunteers around the world.

As a Linux user, the easiest way to connect to VPN Gate servers is OpenVPN.

Just install openvpn from the official repositories.

sudo pacman -S openvpn

and randomly grab a configuration file like this: vpngate_vpn197292320.opengw.net_udp_1786.ovpn

http://www.vpngate.net/en/do_openvpn.aspx?fqdn=vpn197292320.opengw.net&ip=223.223.103.92&tcp=1620&udp=1786&sid=1427186718705&hid=1019730

Then use openvpn client to read the config file and connect to the free internet:

sudo openvpn vpngate_vpn197292320.opengw.net_udp_1786.ovpn

I wrote a simple Perl script to get OpenVPN configs from VPN Gate:

https://gist.github.com/xatier/8911e8737089e9eaa236

That will show you a list of available VPNs and save the config file for you.

demo:



Reference:
http://www.vpngate.net/en/
https://wiki.archlinux.org/index.php/OpenVPN



Note for shadowsocks

Shadowsocks is a popular open-source tunneling tool in China.

https://github.com/shadowsocks/shadowsocks
https://www.archlinux.org/packages/community/any/shadowsocks/

(both server and client)
sudo pacman -S shadowsocks

Salsa20 & Chacha20 support
sudo pacman -S libsodium python2-numpy python2-salsa20

/etc/shadowsocks/config.json
{
    "server": "remote-shadowsocks-server-ip-addr",
    "server_port": 8888,
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "password": "your-passwd",
    "timeout": 300,
    "method": "aes-256-cfb",
    "fast_open": false,
    "workers": 1
}


server
sudo ssserver -c /etc/shadowsocks/config.json --user nobody

run as daemon
sudo ssserver -c /etc/shadowsocks/config.json --user nobody -d start
sudo ssserver -d stop

client
sslocal -c /etc/shadowsocks/config.json

run as daemon
sudo sslocal -c /etc/shadowsocks/config.json -d start
sudo sslocal -c /etc/shadowsocks/config.json -d stop
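
A quick way to sanity-check the local SOCKS5 proxy before touching the browser (assuming curl is installed):

curl -I --socks5-hostname 127.0.0.1:1080 https://www.google.com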


Chromium: use Proxy SwitchyOmega and connect to a local socks5 proxy
https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif

Android Client
https://play.google.com/store/apps/details?id=com.github.shadowsocks

QR code for Android client
sudo pacman -S python2-qrcode
echo -n "ss://"`echo -n aes-256-cfb:password@1.2.3.4:8388 | base64` | qr


References:

https://github.com/shadowsocks/shadowsocks/wiki
https://wiki.archlinux.org/index.php/Shadowsocks_%28%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87%29



Thursday, February 12, 2015

Convert video to gif with ffmpeg


ffmpeg -i input.mkv -vf scale=1024:-1 -t 20 -r 20 output.gif

-vf scale=1024:-1 : scale the output to 1024 pixels wide, keeping the aspect ratio
-t 20 : limit the output to 20 seconds
-r 20 : set the frame rate to 20 fps

Example:

ffmpeg -i \[Kamigami\]\ Puella\ Magi\ Madoka\ Magica\ -\ 03\ \[BD\ 1920×1080\ x264\ Hi10P\ FLAC\(Jap\,Eng\,Ita\)\ Sub\(GB\,Big5\,Jap\,Eng\)\].mkv -vf scale=1024:-1 -t 20 -r 30 output.gif


https://i.imgur.com/DR0YYH1.gif
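
If the gif comes out with ugly dithering, ffmpeg's palettegen/paletteuse filters usually help; a sketch I haven't run on the clip above:

ffmpeg -t 20 -i input.mkv -vf "fps=20,scale=1024:-1,palettegen" palette.png
ffmpeg -t 20 -i input.mkv -i palette.png -filter_complex "fps=20,scale=1024:-1[x];[x][1:v]paletteuse" output.gif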







Tuesday, January 27, 2015

security related chrome extensions

I use the Chrome browser (Chromium on Linux boxes); here's a list of security-related Chrome extensions I'm using.

I use open source solutions whenever possible; feel free to install them in your browser ;-)

µBlock - ad block (GPLv3)
https://chrome.google.com/webstore/detail/%C2%B5block/cjpalhdlnbpafiamejdnhcphjbkeiagm
https://github.com/gorhill/uBlock/


Proxy SwitchyOmega - proxy switcher (GPLv3)
https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif
https://github.com/FelisCatus/SwitchyOmega


Edit This Cookie - cookie editor (GPLv3)
https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg
https://github.com/fcapano/Edit-This-Cookie


HTTPS Everywhere - use HTTPS whenever possible (mostly MIT)
https://chrome.google.com/webstore/detail/https-everywhere/gcbommkclmclpchllfjekcdonpmejbdp
https://github.com/EFForg/https-everywhere


Privacy Badger - block spying ads and invisible trackers (GPLv3)
https://chrome.google.com/webstore/detail/privacy-badger/pkehgijcmpdhfbdbbnkijodmdjhbjlgp
https://github.com/EFForg/privacybadgerchrome


Wappalyzer - web page analyzer (GPLv3)
https://chrome.google.com/webstore/detail/wappalyzer/gppongmhjkpfnbhagpmjfkannfbllamg
https://github.com/ElbertF/Wappalyzer



closed source extensions:
User-Agent Switcher - A User Agent manager
https://chrome.google.com/webstore/detail/user-agent-switcher/ffhkkpnppgnfaobgihpdblnhmmbodake


Hola Better Internet - browser VPN
https://chrome.google.com/webstore/detail/hola-better-internet/gkojfkhlekighikafcpjkiklfbnlmeio

Saturday, January 10, 2015

Watching letv anime

Ray asked me to try the anime streaming service of letv (樂視), a popular streaming service in China.

http://comic.letv.com/

By default, the player blocks IPs outside of China, but we can bypass that easily with a very simple proxy trick.



As a Chrome user, I use "Proxy SwitchyOmega" to set proxies inside the browser (instead of a system-wide proxy setting).

https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif


previous version: Proxy SwitchySharp
https://chrome.google.com/webstore/detail/proxy-switchysharp/dpplabbmogkhghncfbfdeeokoefdjegm


1. Add the proxy configuration file (PAC script) to a new proxy profile

PAC URL: http://pac.uku.im/pac.pac



2. Add the entire domain to auto-switch mode rules




3. test the proxy with the following URL


http://uku.im/check


The proxy setting is working if the URL returns true.


4. enjoy the anime













Reference:

https://github.com/zhuzhuor/Unblock-Youku/wiki/%E4%BB%A3%E7%90%86%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%AE%BE%E7%BD%AE%E7%A4%BA%E4%BE%8B