Bash Commands


My commonly used commands

This is a page for my personal reference. TLDR pages has a more comprehensive command collection.

Background settings

Take the local repo of this blog as an example.

Show current path.

$ pwd
/home/vin100/quickstart

Distinguish files from folders.

$ ls -l
total 28
drwxrwxr-x 2 vin100 vin100 4096 Jun 28 21:00 archetypes
-rw-rw-r-- 1 vin100 vin100 2734 Aug 28 17:30 config.toml
drwxrwxr-x 4 vin100 vin100 4096 Jun 29 00:47 content
drwxrwxr-x 3 vin100 vin100 4096 Aug 25 15:29 layouts
drwxrwxr-x 9 vin100 vin100 4096 Jun 29 00:47 public
drwxrwxr-x 4 vin100 vin100 4096 Aug 25 13:36 static
drwxrwxr-x 3 vin100 vin100 4096 Aug 27 18:06 themes

apg

Advanced password generator.

My preferred way: partial management.

  1. Partial password saved: apg -a 1 -n 1 -m 6 -x 8 -M SNCL > output.txt
  2. A password manager manages the partial passwords.
  3. (De)centralized local storage. (bare repo on USB devices)

Flag explanations (in alphabetical order):

  • -a [0|1]: algorithm
    • 0: can be “pronounced”
    • 1: “random” string
  • -m: minimum password length
  • -M: mode
    • c: should include capital letter
    • C: must include capital letter
    • l: should include lowercase letter
    • L: must include lowercase letter
    • n: should include number
    • N: must include number
    • s: should include special character
    • S: must include special character
  • -n: number of output passwords
  • -x: maximum password length

apt-get

CLI Package manager. Requires sudo privilege.

sudo apt-get … Function
update update local repo
upgrade upgrade package version
install install a package
remove remove a package
purge remove a package and erase its folder
autoremove automatically remove unnecessary dependencies
clean remove unused local cache files
-s simulate, print only, dry run

aptitude

I redirect interested readers to the post on my old blog to avoid duplicating efforts.

awk

I use it to extract column(s). Note that the awk program should be single-quoted: with double quotes ", the shell would expand $9 and $5 itself before awk ever sees them.

$ ls -dl * | awk '{print $9, $5}'
archetypes 4096
config.toml 2861
content 4096
layouts 4096
public 4096
static 4096
themes 4096

It can be used to extract the Git remote URL from git remote -v. The two output lines stand for the fetch and push URLs.

$ git remote -v | awk '{print $2}'
git@gitlab.com:VincentTam/vincenttam.gitlab.io.git
git@gitlab.com:VincentTam/vincenttam.gitlab.io.git

It’s a sequence of /PAT/ {ACTION} pairs, in which at most one of the two parts can be omitted. I suggest man mawk for a concise user guide. Things are executed record-wise, and the record separator (RS) is, by default, the newline character. Setting RS to the empty string switches awk to paragraph mode, in which records are separated by blank lines.

/PAT/ can also be the special patterns BEGIN and END. The former is useful for string substitutions and printing table headers.
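As a sketch (with made-up numeric input), all three parts can appear together:

```shell
# BEGIN runs before any input, the middle rule runs once per record,
# and END runs after the last record.
printf '3 10\n4 20\n5 30\n' | awk '
  BEGIN { print "values:" }
  { sum += $2; print $2 }
  END { print "sum=" sum }'
```

The BEGIN block prints a header; END reports the total accumulated record-wise.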

Some built-in variables:

  • NR: current record number (e.g. NR==1{next} 1 prints all lines except the first one)
  • NF: number of fields in the current record (e.g. $NF stands for the last field.)
  • RS: input record separator (default: newline ↵)
  • ORS: output record separator (default: newline ↵)
  • FS: input field separator (default: white space ␣)
  • OFS: output field separator (default: white space ␣)
  • RSTART: regex match() start index (0 if no match)
  • RLENGTH: regex match() length (-1 if no match)
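A quick demonstration of NR, NF and OFS on made-up input:

```shell
# Print the record number, the field count and the last field;
# commas in print insert the output field separator '|'.
printf 'a b\nc d e\n' | awk -v OFS='|' '{ print NR, NF, $NF }'
# → 1|2|b
#   2|3|e
```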

Some string functions:

  • match(str, regex): return the character position of the leftmost longest match (0 if none); it also sets RSTART and RLENGTH

  • substr(str, start[, len]): return a len-character-long substring of str. If len is omitted, then return the whole line suffix starting from start.

  • sub(pat, repl[, target]): substitute the leftmost longest match of pat in target ($0 if omitted) with repl. Return the number of matches. (either 0 or 1)

  • gsub(pat, repl[, target]): substitute every match of pat in target ($0 if omitted) with repl. Return the number of match(es).

    In section 9.1.3 of the GNU Awk User’s Guide, the following incorrect example is shown. At first I didn’t know why it’s wrong.

    gsub(/xyz/, "pdq", substr($0, 5, 20))  # WRONG
    

    Even though some commercial variants of awk allow this, it’s not portable across awk implementations. As a result, it should be avoided.

    The reason it’s wrong is that (g)sub attempts to assign the value pdq to the part of substr’s result which matches xyz. But the return value of substr() can’t be assigned to.

  • split(str, arr[, fldsp[, sparr]]): split str according to pattern fldsp (FS if omitted) and store them into arr. Return length(arr).

    split("123.456.789", arr, ".", sparr)
    arr[1] = "123"
    arr[3] = "789"
    sparr[2] = "."
    
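The snippet above is pseudocode; an executable version of the same split() call could look like this:

```shell
# A single-character separator like "." is taken literally, not as a regex.
awk 'BEGIN {
  n = split("123.456.789", arr, ".")
  print n, arr[1], arr[2], arr[3]
}'
# → 3 123 456 789
```

(The fourth sparr argument is a gawk extension, so it’s omitted here.)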
  • system(cmd): execute the shell command stored in the variable cmd.

    The following command attempts to fetch a local copy of a remote image file referenced in index.md. The rationale for doing that is to avoid hot-linking. This command would fail for a line containing both https://example.com/another.extension and sample.png, but that’s good enough to replace repeated copy and paste in a very simple file.

    $ awk '/https/ && /png/ {cmd="curl -L -O "$NF;system(cmd)}' index.md
    

bash

GNU’s Bourne-again shell. Use this with -c [CMD] to execute a CMD.
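For example (single quotes keep the outer shell from expanding the command first):

```shell
bash -c 'echo $((6 * 7))'                 # arithmetic runs in the child shell
bash -c 'for i in a b; do echo $i; done'  # a whole loop as one CMD string
```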

Key SIGNAL effect
^C SIGINT interrupt current process in the foreground
^D NIL send EOF by default, normally terminating a bash utility
^Z SIGTSTP suspend current process (see also: fg and bg)

The caret notation ^★ means <C-★>. For example, ^D means <C-d>.

Some built-in commands:

  • pwd: print current working directory

  • cd [PATH]: change directory. If PATH is omitted, switch to ~.

  • history [n]: print last n command(s) with line numbers.

  • alias foo=[CMD]: set up foo as a shorthand for CMD; usually appears in ~/.bashrc

  • jobs: display a list of current shell process(es)

  • bg [jobspec] / fg [jobspec]: run suspended job in the background / foreground.

  • kill [jobspec]: send SIGTERM to jobspec to kill current job.

    jobspec meaning
    %+ current job, same as % and %%
    %- previous job
    %[n] job number n
    %foo a job named foo (the first word in cmd)
    %?foo a job whose command contains foo

    In fact, bg and fg can be omitted. Simply calling the jobspec (say, %2) will run the job in fg. Appending & to the jobspec (say, %1 &) will run the job in bg.

    Usage with process ID: kill [OPTION(S)] [PID]. PID can be found via ps aux. When referring to processes in jobs, prepend the job ID with %. (e.g. kill %3)

    • -TSTP: “polite” stop (SIGTSTP), suspend a job instead of killing it. (Thanks to Steve Burdine .)
    • -STOP: “hard” stop (SIGSTOP), use -TSTP instead if possible
    • -CONT: continue job (SIGCONT)

    Some thoughts on %+ and %-: once a running job (in either bg or fg, of whatever sign: +, -,  , through ^Z or kill -TSTP) is suspended, it automatically becomes the current job. As a result,

    • newly suspended job → %+
    • old %+ → %-

    Some thoughts on %foo and %?foo: if foo matches multiple jobs, bash will throw an error. However, Zsh won’t: the most recent matching job will be run.

  • read [VAR(S)]: read STDIN and set it to shell VAR(S). This avoids exposing VAR(S) value(s) to bash history.

    • -d [DELIM]: set delimiter to DELIM. Read input until DELIM. Note that -d'' is the same as -d $'\0'.

    • -r: prevent backslash \ from escaping characters. (e.g. read \n as a backslash \ followed by the letter n instead of “line feed”.)

    • -p: prompt message (e.g. “Enter your input: ”).

    • -s: secret input. Useful for password input.

    • -a [VAR_NAME]: read input, and split it into an array (of fields) named VAR_NAME according to IFS (input field separator).

      $ (IFS=','; read -a myarr <<< 'a,b,c'; for i in ${myarr[@]}; do echo $i; done)
      a
      b
      c
      
  • which [CMD]: find out the absolute path of CMD

  • for [var] in [value]; do [cmd(s)]; done: loop over items in a value

    Here’s a simple example using the .. range operator.

    $ for l in {A..b}; do printf '%s ' $l; done; echo
    A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [  ] ^ _ ` a b
    

    Change white space ␣ to comma ,.

    $ for l in {A..b}; do
    > printf '%s' $l
    > [[ $l == 'b' ]] && printf '\n' || printf ','
    > done
    A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,[,,],^,_,`,a,b
    

    It’s possible to reverse the start and the end of the range.

    $ for l in {z..s}; do printf '%s ' $l; done; echo
    z y x w v u t s
    

    Another example is to list non-hidden files in the current directory (using $(ls)).

    $ for f in $(ls); do file $f; done
    LICENSE: Unicode text, UTF-8 text, with CRLF line terminators
    archetypes/: directory
    config.toml: ASCII text, with CRLF line terminators
    content/: directory
    data/: directory
    layouts/: directory
    public/: directory
    resources/: directory
    static/: directory
    staticman.yml: ASCII text, with CRLF line terminators
    themes/: directory
    

    ⚠️ The above for loop isn’t a correct way to loop over files, since file names might contain spaces.

    $ mkdir tempdir
    $ cd tempdir
    $ touch 'file 1.txt'
    $ touch file2.txt
    $ ls
    'file 1.txt'   file2.txt
    $ file *
    file 1.txt: empty
    file2.txt:  empty
    $ for f in $(ls); do file $f; done
    file: cannot open `file' (No such file or directory)
    1.txt: cannot open `1.txt' (No such file or directory)
    file2.txt: empty
    

    For a correct treatment, we have to

    1. use find to get a list of matching file names separated with the null character $'\0'.
    2. pipe it to a while loop (see next bullet point), because we are using
    3. the command read.
  • while [cmd] [var]; do [other_cmd(s)]; done: loop over command output from cmd. Often used with read to get loop variable values from file.

    • IFS=: input field separator (not to be confused with the “record separator” RS)
    while read url; do echo "$url"; done
    

    (source of the above commands)

    Here are some examples explaining how IFS= can be used together with read -d in a while loop. (-d plays the role of RS.) The examples I’ve found online usually set IFS= and read -d to the same character, making them a bit difficult for beginners to understand. I’ve put together some simple examples to illustrate their difference.

    Before observing the effects of these three together, let’s see how the first two work.

    The for loop returns 1,2,3,4,5,6,7,8,9,10 (with trailing EOL), which is piped to a subshell (…) that

    1. set IFS to comma ,
    2. read first two fields as var1 and var2, and the rest as rest
    3. display these three shell variables’ name and value.
    4. ensure that these variables aren’t accessible outside (…).
    (for i in {1..10}; do
    > printf '%d' $i
    > (( $i == 10 )) && printf '\n' || printf ','
    > done) | \
    > (IFS=,;read var1 var2 rest;echo var1=$var1;echo var2=$var2;echo rest=$rest)
    var1=1
    var2=2
    rest=3 4 5 6 7 8 9 10
    

    Here read reads one record (often separated by newline ↵. In this case, it’s the entire output from the for loop.), and it stores the first two fields into $var1 and $var2, while storing the rest into $rest.

    If we add the option -d , to read, then the value for $var2 and $rest will be empty. This is because read only reads from the start till the first comma ,.

    (for i in {1..10}; do
    > printf '%d' $i
    > (( $i == 10 )) && printf '\n' || printf ','
    > done) | \
    > (IFS=,;read -d , var1 var2 rest;echo var1=$var1;echo var2=$var2;echo rest=$rest)
    var1=1
    var2=
    rest=
    

    Now we can try the three keywords IFS=, read -d and while loop together, with a simple string abc,def:ghi,jkl.

    $ echo 'abc,def:ghi,jkl' | \
    > while IFS=: read -d , var1 var2; do
    > echo var1=$var1 echo var2=$var2
    > done
    var1=abc echo var2=
    var1=def echo var2=ghi
    $ echo var1=$var1; echo $?
    var1=
    0
    

    Explanations:

    1. The condition for while is read -d , var1 var2. As long as this command runs without error (i.e. echo $? gives 0), the loop continues. Here’s a simple command to help understand this.

      $ (echo 'abc' | read -d , var1; echo exit status=$?; echo var1=$var1)
      exit status=1
      var1=
      

      read -d , expects a comma , in the string abc\n to stop reading, but abc\n contains no comma ,. That’s what causes the nonzero exit status.

    2. The first iteration (i.e. first execution of read -d , var1 var2) gives abc, which doesn’t contain the IFS (i.e. colon :). As a result, the second field $var2 is empty.

    3. In the second iteration, the substring def:ghi is split into two fields due to the presence of the colon : in the middle. As a result, $var1 and $var2 are def and ghi respectively.

    4. In the remaining substring jkl\n, there’s no presence of the delimiter -d (i.e. comma ,). As a result, the read -d , var1 var2 would give a nonzero exit status, which ends this while loop. Here’s a closer look into the exit status of this while loop.

      $ echo 'abc,def:ghi,jkl' | \
      > while IFS=: read -d , var1 var2; do
      > echo "exit status=$?";
      > echo "var1=$var1";
      > echo "var2=$var2";
      > done
      exit status=0
      var1=abc
      var2=
      exit status=0
      var1=def
      var2=ghi
      $ echo $?
      0
      
    5. The IFS=: is placed right before read so that the modified IFS applies only to that read invocation; it doesn’t leak into the loop body or the rest of the shell.

    6. After this while loop, I ran echo var1=$var1; echo $? to observe that

      • $var1 isn’t visible after the loop, since the while loop runs in a subshell as part of a pipeline
      • running echo on an unset variable won’t give a nonzero exit status.

    In find’s command for listing files, we have IFS= and read -d ''. The latter ensures that in each iteration, a whole NUL-delimited string is read. The former is necessary because the default IFS=$' \t\n' (obtained from set | grep ^IFS) would break any file name containing a white space ␣ into multiple fields, which is not something we want to see.
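Putting the three pieces together, here’s a sketch of the space-safe file loop, run in a scratch directory created with mktemp:

```shell
dir=$(mktemp -d)                        # throwaway directory for the demo
touch "$dir/file 1.txt" "$dir/file2.txt"
# find emits NUL-delimited names; read -d '' consumes them one by one,
# while IFS= keeps spaces inside each name intact.
find "$dir" -maxdepth 1 -type f -print0 |
  while IFS= read -r -d '' f; do
    echo "got: $f"
  done
```

Unlike the $(ls) loop shown earlier, 'file 1.txt' survives in one piece.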


  • command [-Vv] [CMD] [ARG(S)]

    • -v: print command absolute path or alias
    • -V: explain command type of CMD in a complete sentence
    • no flags: execute CMD while bypassing shell aliases and functions. (that’s why colors get stripped off: colored aliases such as ls='ls --color=auto' are skipped)

Some shell evaluations:

  • $(( 5%3 )): do basic arithmetic
  • $(…)/`…`: command evaluation
  • see test for binary operators for comparison
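For example:

```shell
echo $(( 5 % 3 ))      # arithmetic expansion → 2
echo "$(echo nested)"  # command substitution, modern form
echo "`echo nested`"   # the equivalent legacy backtick form
```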

Some shell strings manipulations:

Sotapme’s answer on Stack Overflow refers to an old Linux Journal article that clearly explains shell parameter expansion.

Four trimming combinations: ${variable#pattern}, …

from \ match shortest longest
start # ##
end % %%

Example: file (extension) name/path extraction

$ foo=/tmp/my.dir/filename.tar.gz
$ path=${foo%/*}
$ echo $path
/tmp/my.dir
$ file=${foo##*/}
$ echo $file
filename.tar.gz
$ base=${file%%.*}
$ echo $base
filename
$ ext=${file#*.}
$ echo $ext
tar.gz

Boolean operators:

  • !: NOT

    if [ ! -d "$DIR" ]
    then
      mkdir -p "$DIR"   # for instance, create the missing directory
    fi
    
  • &&: AND

    Can be used for chaining commands in list constructs. cmd2 will be executed only if cmd1 returns true (zero). The command terminates if cmd1 returns false (non-zero).

    cmd1 && cmd2
    
  • ||: OR

    cmd2 will be executed only if cmd1 returns false (nonzero). The command terminates if cmd1 returns true (zero).

    cmd1 || cmd2
    

&& and || can be chained together. The following two commands are equivalent.

cmd1 && cmd2 || cmd3
(cmd1 && cmd2) || cmd3

In the first command, even though cmd1 returns false, the command won’t terminate. Beware that such chaining is not a true if-then-else: if cmd1 succeeds but cmd2 fails, cmd3 will still run.
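A one-line demonstration of the caveat:

```shell
# cmd1 succeeds, cmd2 fails, yet cmd3 runs anyway:
true && false || echo 'cmd3 ran anyway'   # → cmd3 ran anyway
```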

Redirection:

2>&1: redirect STDERR (standard error) to STDOUT (standard output).

$ rm non_existent.txt > err_msg.txt 2>&1
$ cat err_msg.txt
rm: cannot remove 'non_existent.txt': No such file or directory

This Stack Overflow answer gives a nice explanation of why the order of > err_msg.txt and 2>&1 matters. Imagine the redirection operator > as a one-way tube connecting the (output) streams. The tubes have to be set up before water (i.e. the command’s output) flows. The first part > err_msg.txt reads “set up a one-way tube STDOUT → err_msg.txt”; the second part 2>&1 reads “redirect STDERR (stream 2) to STDOUT (stream 1)”.
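A sketch of the tube order in action (scratch file via mktemp):

```shell
log=$(mktemp)
# STDOUT is tubed to the file first; STDERR then follows it there:
ls no_such_file > "$log" 2>&1 || true
cat "$log"    # the error message was captured in the file
# With the order reversed (2>&1 > "$log"), STDERR would be pointed at
# stream 1's OLD destination (the terminal) before STDOUT is re-routed,
# so the error would still show up on screen and the file would stay empty.
```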

Input text into command: here string <<<, here document << and input redirection <.

  • input redirection: cmd < filename

    Run cmd on the file content of filename.

    Most commands (e.g. awk, cat, sort, etc.) accept filename as an argument. That is, both cat < filename and cat filename give the same result. tr is the only easy exception that I know of.
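A small illustration with tr (scratch file via mktemp):

```shell
f=$(mktemp)
printf 'hello\n' > "$f"
tr a-z A-Z < "$f"     # → HELLO
# whereas `tr a-z A-Z "$f"` is a usage error: tr takes no file name argument
```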

    ChatGPT has found another example: read. This is evident, since the builtin read only reads from STDIN and accepts no file name argument.

    I’ve spent one night constructing this example.

    $ find -maxdepth 1 -type f -print > files.txt
    $ echo $'\t' >> files.txt
    $ od -A d -t x1 -c files.txt
    0000000  2e  2f  2e  67  69  74  69  67  6e  6f  72  65  0a  2e  2f  2e
              .   /   .   g   i   t   i   g   n   o   r   e  \n   .   /   .
    0000016  67  69  74  6c  61  62  2d  63  69  2e  79  6d  6c  0a  2e  2f
              g   i   t   l   a   b   -   c   i   .   y   m   l  \n   .   /
    0000032  2e  67  69  74  6d  6f  64  75  6c  65  73  0a  2e  2f  2e  68
              .   g   i   t   m   o   d   u   l   e   s  \n   .   /   .   h
    0000048  75  67  6f  5f  62  75  69  6c  64  2e  6c  6f  63  6b  0a  2e
              u   g   o   _   b   u   i   l   d   .   l   o   c   k  \n   .
    0000064  2f  63  6f  6e  66  69  67  2e  74  6f  6d  6c  0a  2e  2f  66
              /   c   o   n   f   i   g   .   t   o   m   l  \n   .   /   f
    0000080  69  6c  65  73  2e  74  78  74  0a  2e  2f  4c  49  43  45  4e
              i   l   e   s   .   t   x   t  \n   .   /   L   I   C   E   N
    0000096  53  45  0a  2e  2f  73  74  61  74  69  63  6d  61  6e  2e  79
              S   E  \n   .   /   s   t   a   t   i   c   m   a   n   .   y
    0000112  6d  6c  0a  09  0a
              m   l  \n  \t  \n
    0000117
    $ IFS=$'\n' read -r -d $'\t' -a test < files.txt
    $ declare -p test
    declare -a test=([0]="./.gitignore" [1]="./.gitlab-ci.yml" [2]="./.gitmodules" [3]="./.hugo_build.lock" [4]="./config.toml" [5]="./files.txt" [6]="./LICENSE" [7]="./staticman.yml")
    
    1. Make a list of newline ↵ -delimited files in the current directory, but not in any subdirectory, and store it into the file files.txt. Note that the file itself is also on the list.
    2. Append a line consisting only of tab ↹ so that read -d $'\t' can read the entire list.
    3. Use od to verify that the file is newline ↵ -delimited and that it terminates with \t\n.
    4. Read this list, i.e. the file content of files.txt
      • using newline ↵ as a local IFS
      • -r: prevent backslash escape
      • -d $'\t': read the list (containing newlines ↵) until the tab ↹ (excluded)
      • -a test: store the read results into an array variable named test.
    5. Display the variable test as a key-value pair.
  • here document: cmd << EOF

    Allow multi-line input for cmd, whose end is marked by a line consisting only of EOF (end of file).

    $ cat << EOF | wc -l
    > line 1
    > line 2
    > EOF
    2
    $ cat << EOF | od -bc
    > line 1
    > line 2
    > EOF
    0000000 154 151 156 145 040 061 012 154 151 156 145 040 062 012
              l   i   n   e       1  \n   l   i   n   e       2  \n
    0000016
    
    1. Pass the two-lined input

      line 1
      line 2
      

      like a file content to the command cat.

    2. cat transfers STDIN to STDOUT, which is piped to wc.

    3. wc -l counts the number of lines in the piped content, and displays the result in STDOUT.

    4. Repeat the steps for od, which displays what’s really piped.

  • here string <<<: cmd <<< string

    Pass string to cmd’s STDIN. It’s often used for passing a single-lined string to STDIN. Here’s an example using a here string to pass similar content to the same set of commands. I’ve deliberately included a trailing newline \n in the here string so as to emphasize the fact that a newline \n is automatically appended to the end of a here string.

    $ cat <<< $'line 1\nline 2\n' | wc -l
    3
    $ cat <<< $'line 1\nline 2\n' | od -bc
    0000000 154 151 156 145 040 061 012 154 151 156 145 040 062 012 012
              l   i   n   e       1  \n   l   i   n   e       2  \n  \n
    0000017
    

bc

GNU’s basic calculator

Flag explanations (in alphabetical order):

  • -l: load math library

    syntax function
    a arctan
    s sin
    c cos
    l natural logarithm
  • -q: quiet, don’t display GNU’s standard welcome message

blkid

Display hard disk UUID.

cat

Concatenate (combine) files and display them (in STDOUT).

  • -n: display line number
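For example:

```shell
printf 'foo\nbar\n' | cat -n
# line numbers come right-aligned, followed by a tab:
#      1  foo
#      2  bar
```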

This can be used with the “null device” to show string without leaving a trace in the bash history.

$ cat > /dev/null
foo bar …
$

See also: tac

code

Open Visual Studio Code. The point of invoking this IDE from the command line interface is that it inherits the parameters set in ~/.bashrc, say the SSH secret key added to the SSH agent, so that it won’t prompt for your SSH secret key’s passphrase every time it synchronises with the remote repo.

  • code /path/to/proj/dir/: open the project directory.
  • code /path/to/some/file: open a particular file.
  • code --list-extensions --show-versions: list all your installed extensions with their version number, in the format author.extension@version.

My VSCode extension list has been uploaded to GitLab.

convert

General usage: convert [input-option] [input-file] [output-option] [output-file].

Supported formats: JPG, PNG, GIF, SVG, etc

GUI software (e.g. GIMP) enables preview of processed images, which is necessary for image sharpening. Therefore, I only list a few options below.

format conversion

$ convert foo.ppm -quality [num] bar.jpg

[num] takes value from 1 to 100. The higher the value, the better the quality and the larger the file.

image manipulation

This is good for batch processing.

Options:

  • -crop WxH+X+Y: crop a region of W px × H px starting from (X,Y)
  • -rotate [DEG]: rotate input-file by DEG degrees clockwise.
  • -resize [DIM1] [DIM2]: resize image (if DIM2 is missing, largest dimension will be taken)

curl

Like wget, grab stuff from the Internet. Support all common protocols. (HTTP(S), (S)FTP, etc)

Basic usage:

$ curl [FLAG(S)] [URL]

It writes to STDOUT. An -o flag can be passed to specify output file.

$ curl https://api.staticman.net/v2/encrypt/testing -o foo.txt

File downloading

To keep the file name, use -O.

$ curl -O https://vincenttam.gitlab.io/page/simple-document-templates/cjk-article.pdf

To download multiple files, use -O in series.

$ curl -O [URL1] -O [URL2]

To continue an interrupted download, use -C like the following line.

$ curl -C - -O https://vincenttam.gitlab.io/page/simple-document-templates/cjk-article.pdf

This saves time and bandwidth in case of network interruption, and this can be applied to large files like ISO files for GNU/Linux distros.

URL redirects

If a page is “moved (permanently)” or the URL is the shortlink of another page, use -L to enable URL redirects.

$ curl https://lstu.fr/3s       # no output
$ curl -L https://lstu.fr/3s    # many lines of output

GET request

$ curl -i -X GET https://api.staticman.net/v2/encrypt/1q2w3e4r

See also: wget

Shorten GitHub URL

⚠️ GitHub no longer accepts new URL shortening requests. As a result, the contents of this subsection are obsolete.

The online version of GitHub’s URL shortener doesn’t allow user-supplied short name.

Successful outcome

$ curl -i https://git.io -F "url=https://vincenttam.github.io/beautiful-jekyll" \
-F "code=bjsm18"; echo
HTTP/1.1 201 Created
Server: Cowboy
Connection: keep-alive
Date: Wed, 19 Dec 2018 21:38:46 GMT
Status: 201 Created
Content-Type: text/html;charset=utf-8
Location: https://git.io/bjsm18
Content-Length: 45
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Runtime: 0.034660
X-Node: 1ba43bb4-4d53-46d8-85c0-3c882f10dc56
X-Revision: 392798d237fc1aa5cd55cada10d2945773e741a8
Strict-Transport-Security: max-age=31536000; includeSubDomains
Via: 1.1 vegur

Failed outcome

$ curl -i https://git.io -F "url=https://vincenttam.github.io/beautiful-jekyll"
-F "code=vbjz"; echo
HTTP/1.1 422 Unprocessable Entity
Server: Cowboy
Connection: keep-alive
Date: Wed, 19 Dec 2018 21:38:13 GMT
Status: 422 Unprocessable Entity
Content-Type: text/html;charset=utf-8
Content-Length: 114
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Runtime: 0.011171
X-Node: db5e77f9-b4e8-41b3-bb6c-a85a5c4493c1
X-Revision: 392798d237fc1aa5cd55cada10d2945773e741a8
Strict-Transport-Security: max-age=31536000; includeSubDomains
Via: 1.1 vegur

"https://vincenttam.github.io/beautiful-jekyll" was supposed to be shortened to
"vbjz", but "existing" already is!

References:

  1. HowtoForge
  2. curl tutorial
  3. curl POST examples

cut

Cut and display the useful part(s) of each line of input.

  • -c[LIST]: select character according to [LIST]
  • -d[DELIM]: set delimiter (default is tab ↹)
  • -f[LIST]: select fields according to [LIST]
  • -z: \0-delimited. Useful for handling strings containing newlines and white spaces, e.g. output of find -print0.
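Quick examples of -c, -d and -f on made-up strings:

```shell
echo 'abcdef' | cut -c2-4       # characters 2 to 4 → bcd
echo 'a:b:c'  | cut -d: -f2     # second field → b
echo 'a:b:c'  | cut -d: -f1,3   # fields 1 and 3, rejoined by ':' → a:c
```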

I discovered this new way of extracting columns with custom delimiters.

$ git --version
git version 2.17.1
$ git --version -v | cut -d' ' -f1  # returns git
$ git --version -v | cut -d' ' -f2  # returns version
$ git --version -v | cut -d' ' -f3  # returns 2.17.1

However, with -d' ' this can’t be used for extracting the Git remote URL from git remote -v, since a tab \t, not a space, separates the first two columns.

$ git remote -v | head -1 | od -An -c
    o   r   i   g   i   n  \t   g   i   t   @   g   i   t   l   a
    b   .   c   o   m   :   V   i   n   c   e   n   t   T   a   m
    /   v   i   n   c   e   n   t   t   a   m   .   g   i   t   l
    a   b   .   i   o   .   g   i   t       (   f   e   t   c   h
    )  \n

In this case, awk has to be used.

date

Display or adjust the system date. Defaults to the current time (zone).

Flag explanations (in alphabetical order):

  • -d [STR]: convert STR to +%c

    $ date -d '@1536336779'
    Friday, September 07, 2018 PM06:12:59 CEST
    
  • -f [FILE]: read from FILE line by line and convert to +%c

  • -I[d|h|m|s|n]: ISO 8601 (default to d)

    [dhmsn] output
    n 2018-09-07T18:12:59,822423484+02:00
    s 2018-09-07T18:12:59+02:00
    m 2018-09-07T18:12+02:00
    h 2018-09-07T18+02:00
    d 2018-09-07
  • -R: RFC 2822 date format, for sending emails 📧

    $ date -R -d "2018-09-07 18:12:59"
    Fri, 07 Sep 2018 18:12:59 +0200
    
  • --rfc-3339=[d|s|n]: similar to -I, with small differences (a space ␣ in place of T; a period . in place of the comma ,)

    [dsn] output
    n 2018-09-07 18:12:59.822423484+02:00
    s 2018-09-07 18:12:59+02:00
    d 2018-09-07
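Two examples combining -d with the flags above (GNU date assumed; 1536336779 is the epoch timestamp used throughout this section):

```shell
TZ=UTC date -I -d '@1536336779'    # → 2018-09-07
date -u -R -d '@1536336779'        # → Fri, 07 Sep 2018 16:12:59 +0000
```

Note that 18:12:59 CEST corresponds to 16:12:59 in UTC.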

Formats:

$ echo $LC_TIME
en_HK.UTF-8
$ date +%A
Friday
+% e.g. Remarks
z +0200
:z +02:00
::z +02:00:00
:::z +02 shortest numeric time zone
Z CEST
c Friday, September 07, 2018 PM06:12:59 CEST locale’s date and time
Y 2018
C 20
y 18
q 3 quarter
m 09
B September
b Sep same as %h
U 35 Week no. (week starts from Sunday, 00–53)
V 36 ISO week no. (week starts from Monday, 01–53)
W 35 Week no. (week starts from Monday, 00–53)
j 250 day of year, jour in French (001–366)
F 2018-09-07 Full date: %Y-%m-%d
x Friday, September 07, 2018 locale’s date
w 6
A Friday
a Fri
d 07
e 7 %_d
p PM blank if unknown
P pm idem
r PM06:12:59 CEST locale’s 12-hour time
T 18:12:59
X 06:12:59 CEST locale’s time
R 18:12
H 18
k 18 %_H
I 06
l 6 %_I
M 12
s 1536336779 seconds elapsed since 01/01/1970 00:00:00 UTC
S 59
n Enter ↵
t Tab ↹

 

Optional flags in between % and [char] meaning
- no padding
_ pad with white space ␣
0 pad with 0
^ try uppercase

 

Acronym meaning
LC locale
CEST Central European Summer Time

df

Disk free. Return amount of used and available disk space.

If a file is specified in the argument, df will return the row which represents the file system containing the file.

$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             3982080        0   3982080   0% /dev
tmpfs             802728     1304    801424   1% /run
/dev/sda7       29396988 10706500  17174152  39% /
tmpfs            4013620    21500   3992120   1% /dev/shm
tmpfs               5120        4      5116   1% /run/lock
tmpfs            4013620        0   4013620   0% /sys/fs/cgroup
/dev/sda6         463826   151423    283936  35% /boot
tmpfs             802724       12    802712   1% /run/user/1000
  • -B[SIZE]: set unit to SIZE. GB is the SI counterpart of G.

    $ df -BGB
    Filesystem     1GB-blocks  Used Available Use% Mounted on
    udev                  5GB   0GB       5GB   0% /dev
    tmpfs                 1GB   1GB       1GB   1% /run
    /dev/sda7            31GB  11GB      18GB  39% /
    tmpfs                 5GB   1GB       5GB   1% /dev/shm
    tmpfs                 1GB   1GB       1GB   1% /run/lock
    tmpfs                 5GB   0GB       5GB   0% /sys/fs/cgroup
    /dev/sda6             1GB   1GB       1GB  35% /boot
    tmpfs                 1GB   1GB       1GB   1% /run/user/1000
    

    This flag doesn’t give accurate results due to rounding errors. Use this with care.

  • -h: human readable sizes

    $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            3.8G     0  3.8G   0% /dev
    tmpfs           784M  1.3M  783M   1% /run
    /dev/sda7        29G   11G   17G  39% /
    tmpfs           3.9G   22M  3.9G   1% /dev/shm
    tmpfs           5.0M  4.0K  5.0M   1% /run/lock
    tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
    /dev/sda6       453M  148M  278M  35% /boot
    tmpfs           784M   12K  784M   1% /run/user/1000
    
  • -H: SI counterpart of -h

    $ df -H
    Filesystem      Size  Used Avail Use% Mounted on
    udev            4.1G     0  4.1G   0% /dev
    tmpfs           822M  1.4M  821M   1% /run
    /dev/sda7        31G   11G   18G  39% /
    tmpfs           4.2G   23M  4.1G   1% /dev/shm
    tmpfs           5.3M  4.1k  5.3M   1% /run/lock
    tmpfs           4.2G     0  4.2G   0% /sys/fs/cgroup
    /dev/sda6       475M  156M  291M  35% /boot
    tmpfs           822M   13k  822M   1% /run/user/1000
    
  • -t: file system type

    $ df -t ext4
    Filesystem     1K-blocks     Used Available Use% Mounted on
    /dev/sda7       29396988 10706516  17174136  39% /
    /dev/sda6         463826   151423    283936  35% /boot
    
  • --total: add a grand total row at the bottom and fill in every column of that row.

  • -x: opposite of -t

See also: du

diff

🆚 Display the difference between two text files.

Basic usage: diff [FILE1] [FILE2]

More useful form, showing diff hunks: diff -u [FILE1] [FILE2]
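A self-contained illustration with throwaway files from mktemp (diff exits 1 when the files differ, hence the || true):

```shell
f1=$(mktemp); f2=$(mktemp)
printf 'a\nb\n' > "$f1"
printf 'a\nc\n' > "$f2"
diff -u "$f1" "$f2" || true
# the hunk shows the changed line as "-b" / "+c"
```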

dpkg

Dealing with packages. I only know the listing functionalities.

Flag explanations (in alphabetical order):

  • -l [PKG(S)]: display package name, version, architecture and description

  • -L [PKG]: show installed files’ absolute path

    $ dpkg -L g++
    /.
    /usr
    /usr/bin
    /usr/share
    /usr/share/doc
    /usr/share/man
    /usr/share/man/man1
    /usr/bin/g++
    /usr/bin/x86_64-linux-gnu-g++
    /usr/share/doc/g++
    /usr/share/man/man1/g++.1.gz
    /usr/share/man/man1/x86_64-linux-gnu-g++.1.gz
    

du

Display disk usage in KB. Only folders are shown by default.

General usage: du [FLAG(S)] [DIR(S)] …

If the argument [DIR(S)] is omitted, the size of every (sub)folder (including hidden ones) will be displayed.

$ du

Display the size of each subfolder in the folder layouts.

$ du layouts
12 layouts/partials
20 layouts/

Flag explanations (in alphabetical order):

  • -a: also include all files

  • -c: include grand total at the bottom

    $ du -c layouts static
    12  layouts/partials
    20  layouts
    8   static/css
    196 static/img
    212 static
    232 total
    
  • -d [n]: max depth

    Process contents of [DIR] up to at most n level(s) deep.

    I found the concept of “level” hard to understand when I ran this in . because the output was cluttered with folders holding binary objects.

    Let me illustrate this idea with the following example.

    $ du -d 2 content
    8       content/post/2018-08-29-csb-theorem
    16      content/post/2018-07-17-rodeo
    248     content/post/2018-07-07-upgraded-to-linux-mint-19
    500     content/post/2018-08-18-ubuntu-18-04-installation-on-fujitsu-ah557
    76      content/post/2018-07-26-web-image-optimisation-with-gimp
    920     content/post/2018-07-04-fujitsu-lh532-fan-cleaning
    388     content/post/2018-08-23-brighten-image-with-gimp
    1412    content/post/2018-07-23-fujitsu-lh532-keyboard-cleaning
    3624    content/post
    12      content/page/bash-commands
    20      content/page┆┆
    3652    content┆┆   ┆┆
            ├──────┤├───┤├────────────────────────────────────────────────────┤
              Lv 0  Lv 1                         Lv 2
    
  • -h: human readable

    $ du -h layouts
    12K layouts/partials
    20K layouts/
    
  • --exclude=[FILE]

  • -s: summary, display only [DIR(S)]’s size (equivalent to -d 0). This can be used to measure the size of a folder.

    $ du -s static content
    212     static
    3656    content
    
  • --time: also display time in the middle of each row

    $ du --time static
    8       2018-08-28 16:58        static/css
    196     2018-07-26 15:47        static/img
    212     2018-08-28 16:58        static
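    
A combination I use often: per-folder summaries sorted by size. The demo folders below are made up for illustration:

```shell
# Build two demo folders of different sizes.
mkdir -p du_demo/big du_demo/small
head -c 4096 /dev/zero > du_demo/big/file
head -c 10   /dev/zero > du_demo/small/file
# -s: one summary row per argument; -h: human-readable;
# sort -h understands the K/M/G suffixes emitted by du -h.
du -sh du_demo/* | sort -h
rows=$(du -s du_demo/* | wc -l)
```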
    

See also: df

echo

Display all arguments (in STDOUT).

$ echo foo bar
foo bar
  • -e: enable interpretation of backslash escape sequences

    • \a: shell beep (disabled in Ubuntu by default)

      $ echo -e "\a"
      
      $
      
    • \n: newline ↵

    • \t: tab ↹

  • -n: don’t output ↵ at the end

    $ echo -en "\a"
    $
    

It’s possible to redirect STDOUT to a file, using either > (replace original file) or >> (append to existing file). See the end of Bash for more redirections.

$ echo something > new_file.txt
$ cat new_file.txt
something

Remarks: It’s impossible to append the NULL character \0 to a string terminated with newline ↵.
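
Related to the remark above: printf can emit a null byte even where echo can't reliably do so. Piping to od -c makes the byte visible:

```shell
# Emit the bytes a, NUL, b and inspect them.
# -A n: no offset column; -c: printable characters / escapes.
bytes=$(printf 'a\0b' | od -An -c)
echo "$bytes"   # od renders the NUL byte as \0
```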

ffmpeg

A powerful video-editing tool.

Video trimming

One often wants to trim a video from a given time mm:ss to another end time mm2:ss2.

ffmpeg -ss mm:ss -to mm2:ss2 -i video.mp4 -codec copy output.mp4

ChatGPT has returned a more detailed command with different treatments for the video channel -c:v and the audio channel -c:a, which looks roughly (but not exactly) like this.

ffmpeg -i input.mp4 -c:v [lossless_codec] -c:a copy output

However, those lines might produce strange outputs for MP4s.

Here’s a command that works for MP4s. I’ve found this command thanks to this Super User answer about ffmpeg.

ffmpeg -i gif10s.mp4 -c:v libx264 -crf 0 -preset veryslow -c:a libmp3lame \
-b:a 320k -ss 0.0 -to 3.0 output03.mp4
  • -i gif10s.mp4: use gif10s.mp4 as input file
  • -c:v libx264: use X264, the free h.264 encoder, for video channel.
  • -crf 0: CRF (constant rate factor) controls quality over file size. The higher the number, the worse the quality. A CRF of 18 is considered to be visually lossless.
  • -preset [veryslow|veryfast|ultrafast]: controls the encode speed.
  • -c:a libmp3lame: use LAME MP3 encoder for audio channel.
  • -b:a 320k: set ABR (average bit rate) to 320k.
  • -ss 0.0: controls the starting time
  • -to 3.0: controls the end time. Defaults to the end of the input file if omitted.

File conversion

I’ve only tried converting GIF to MP4.

ffmpeg -i file.gif -movflags +faststart -pix_fmt yuv420p \
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" file.mp4
  • -movflags +faststart: move the metadata to the front of the file, so the video can be watched while still downloading.
  • -pix_fmt yuv420p: the default yuv444p doesn’t please Instagram.
  • -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2": ensure frame dimensions are even instead of odd.

For more info, you may consult Sidney Liebrand’s blog post about ffmpeg conversions.

File compression

ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4

For Discord, use libx264 and a -crf value between 18 and 24.

The lower the CRF value, the higher the output quality.

Reference: a Unix.SE answer about ffmpeg output size reduction

ffprobe

Show details of a video. Without arguments, it outputs to STDERR. For a more accessible output, the answers for this ffmpeg question on Stack Overflow suggest -show_format -v quiet.

$ ffprobe BlueStacks\ App\ Player\ 2023-04-04\ 18-46-42.mp4 -show_format \
> -v quiet
[FORMAT]
filename=BlueStacks App Player 2023-04-04 18-46-42.mp4
nb_streams=2
nb_programs=0
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime / MOV
start_time=0.000000
duration=176.382100
size=130774616
bit_rate=5931423
probe_score=100
TAG:major_brand=mp42
TAG:minor_version=0
TAG:compatible_brands=mp41isom
TAG:creation_time=2023-04-04T16:49:38.000000Z
TAG:artist=Microsoft Game DVR
TAG:title=BlueStacks App Player
[/FORMAT]

Piping it to sed -n 's/duration=//p' gives the desired output.

$ ffprobe BlueStacks\ App\ Player\ 2023-04-04\ 18-46-42.mp4 -show_format \
> -v quiet | sed -n 's/duration=//p'
176.382100

find

Find files under PATH, then -print them and/or -exec command(s) on them. PATH defaults to the current path ..

General usage:

  • -print / -print0: Display files: find [PATH] [FLAG(S)]

    • You may add -print at the end. This won’t change the display, but useful in conjunction with -exec.
    • You may use -print0 so that each output is delimited by null character \0 instead of newline ↵.

    In Ubuntu’s default location of Sublime Text 3 user config files, compare the output of -print

    $ find ~/.config/sublime-text-3/ -maxdepth 1 -print | od -c
    0000000   /   h   o   m   e   /   v   i   n   1   0   0   /   .   c   o
    0000020   n   f   i   g   /   s   u   b   l   i   m   e   -   t   e   x
    0000040   t   -   3   /  \n   /   h   o   m   e   /   v   i   n   1   0
    0000060   0   /   .   c   o   n   f   i   g   /   s   u   b   l   i   m
    0000100   e   -   t   e   x   t   -   3   /   I   n   s   t   a   l   l
    0000120   e   d       P   a   c   k   a   g   e   s  \n   /   h   o   m
    0000140   e   /   v   i   n   1   0   0   /   .   c   o   n   f   i   g
    0000160   /   s   u   b   l   i   m   e   -   t   e   x   t   -   3   /
    0000200   L   o   c   a   l  \n   /   h   o   m   e   /   v   i   n   1
    0000220   0   0   /   .   c   o   n   f   i   g   /   s   u   b   l   i
    0000240   m   e   -   t   e   x   t   -   3   /   L   i   b  \n   /   h
    0000260   o   m   e   /   v   i   n   1   0   0   /   .   c   o   n   f
    0000300   i   g   /   s   u   b   l   i   m   e   -   t   e   x   t   -
    0000320   3   /   C   a   c   h   e  \n   /   h   o   m   e   /   v   i
    0000340   n   1   0   0   /   .   c   o   n   f   i   g   /   s   u   b
    0000360   l   i   m   e   -   t   e   x   t   -   3   /   P   a   c   k
    0000400   a   g   e   s  \n
    0000405
    

    with -print0 using od -c.

    $ find ~/.config/sublime-text-3/ -maxdepth 1 -print0 | od -c
    0000000   /   h   o   m   e   /   v   i   n   1   0   0   /   .   c   o
    0000020   n   f   i   g   /   s   u   b   l   i   m   e   -   t   e   x
    0000040   t   -   3   /  \0   /   h   o   m   e   /   v   i   n   1   0
    0000060   0   /   .   c   o   n   f   i   g   /   s   u   b   l   i   m
    0000100   e   -   t   e   x   t   -   3   /   I   n   s   t   a   l   l
    0000120   e   d       P   a   c   k   a   g   e   s  \0   /   h   o   m
    0000140   e   /   v   i   n   1   0   0   /   .   c   o   n   f   i   g
    0000160   /   s   u   b   l   i   m   e   -   t   e   x   t   -   3   /
    0000200   L   o   c   a   l  \0   /   h   o   m   e   /   v   i   n   1
    0000220   0   0   /   .   c   o   n   f   i   g   /   s   u   b   l   i
    0000240   m   e   -   t   e   x   t   -   3   /   L   i   b  \0   /   h
    0000260   o   m   e   /   v   i   n   1   0   0   /   .   c   o   n   f
    0000300   i   g   /   s   u   b   l   i   m   e   -   t   e   x   t   -
    0000320   3   /   C   a   c   h   e  \0   /   h   o   m   e   /   v   i
    0000340   n   1   0   0   /   .   c   o   n   f   i   g   /   s   u   b
    0000360   l   i   m   e   -   t   e   x   t   -   3   /   P   a   c   k
    0000400   a   g   e   s  \0
    0000405
    

    The output of -print0 doesn’t contain EOL at the end. Instead, it terminates with a null character $'\0'. This allows the read -d '' command with the null character $'\0' as the delimiter to read the last item in the output of -print0.

  • -exec: Execute commands for each matching file: find [PATH] [FLAG(S)] -exec [CMD]

    $ find archetypes -exec file {} \;
    archetypes: directory
    archetypes/default.md: ASCII text
    

    {} \; is necessary: {} stands for each matching file, and \; terminates the -exec command.

    -exec expects a bash command instead of an if-else statement or a for loop. Therefore, there’s no way to place them under -exec unless they are wrapped with sh -c. However, I’ve never tried this since I’ve no idea how to put {} \; inside sh -c.
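
Here's one way to do the sh -c wrapping mentioned above — treat it as a sketch, with made-up file names. The matches are passed as positional arguments after an explicit $0, and the inner shell loops over "$@":

```shell
# Demo tree with two matching files.
mkdir -p find_demo && touch find_demo/a.txt find_demo/b.txt
# The literal 'sh' fills $0 of the inner shell; '{} +' appends all
# matches as "$1" "$2" …, which the for loop walks over.
find find_demo -name '*.txt' -exec sh -c '
  for f in "$@"; do
    cp "$f" "$f.bak"    # e.g. back up every match
  done
' sh {} +
ls find_demo
```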

More options:

  • -type [d|f|l]: file type

      type
    d directory
    f file
    l symbolic link
  • -mindepth [n], -maxdepth [n]: work like du’s -d flag.

  • -and: AND operator, allow conjunction of -print(0) and -exec.

  • -path [PATH]: matches PATH.

  • -o: OR operator, useful when used with -prune

  • -prune: return TRUE when a directory is matched. This is useful to exclude path when used with -path.

    $ ls static
    css  google191a8de293cb8fe1.html  img
    $ find static -path static/css -prune -o
    find: expected an expression after '-o'
    $ find static -path static/css -prune -o -print
    static
    static/img
    static/img/globe.gif
    static/img/file-compress.ico
    static/img/globe16.ico
    static/google191a8de293cb8fe1.html
    
  • -[a|c|m][time|min] [+|-][n]: operate on files whose X time was n Y ago: -[n] means less than n Y ago, +[n] means more than n Y ago, and a bare [n] means exactly n Y ago. n should be an integer.

      X
    a accessed
    c status changed
    m modified

     

      Y
    time days
    min minutes
  • can be piped to a while loop with the null delimiter (read -d '') for batch file operations.

    find . -print0 | while IFS= read -r -d '' file; do …; done
    

    To see why we need to specify the null character twice (once after IFS= and once after read -d), see the explanation for while in Bash.
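
Filling in the loop body, here's a sketch that counts files safely even when names contain spaces (it uses bash process substitution so that the counter survives the loop; the demo folder and file names are made up):

```shell
mkdir -p loop_demo
touch loop_demo/'a file.txt' loop_demo/b.txt
count=0
# IFS= preserves leading/trailing blanks; -d '' splits on NUL bytes,
# matching find's -print0 output.
while IFS= read -r -d '' file; do
  count=$((count + 1))
done < <(find loop_demo -type f -print0)
echo "$count"   # → 2
```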

git

The most popular VCS (version control system). Here’s a minimal collection of commands needed for starters. I’ve put more advanced Git commands in a separate page.

getting started

start from        cmd
scratch           git init
existing project  git clone <source_url> <target_path>

add files to be tracked

Basic usage:

goal cmd
add some files git add <PAT>
add all files git add .

git add adds files contents from the working tree to the index.

Technically, git add stages the file(s) matching the <PAT>tern to the index for committing.

Adding an empty folder is not possible: Git only tracks files.

Here’s a useful trick to list modified non-ignored (tracked and untracked) files. -n stands for --dry-run.

$ git add -An .
add 'content/page/bash-commands/index.md'

Note that the behaviour of git add . has changed since Git 2.0.

unstage files

Avoid committing changes in certain file(s) to the version history. To be used for files in progress.

$ git reset HEAD -- <unready_file>

Update: A newer command git restore is available since Git 2.23.0. It’s more intuitive, comprehensible and specific than git checkout.

$ git restore -- <pathspec>
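
A sketch of git restore in a throwaway repo — the repo path, file name and config values below are demo-only:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'ready' > notes.txt
git add notes.txt && git commit -qm 'initial'
echo 'work in progress' >> notes.txt   # an edit we want to throw away
git restore -- notes.txt               # back to the committed version
content=$(cat notes.txt)
echo "$content"   # → ready
```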

(re)move files

To rename files, use git mv <PAT>, which mv the file(s) matching the PATtern and then “informs” Git of this operation. (stages the mv operation)

To remove (unnecessary) files from Git, consider git rm, which rm the file(s) matching the PATtern and then “informs” Git of this operation. (stages the rm operation)

Keep the file?  example                 cmd                    meaning
yes             RSA/GPG secret keys     git rm --cached <PAT>  remove matching file(s) from the index only
no              system generated files  git rm <PAT>           remove matching file(s) from the index and the file system

ignore files

Simply put the files that you don’t wish to be tracked into ./.gitignore, one pattern per line. Git will read this .gitignore file and take it into account.

show status

If you aren’t sure whether your files are changed, use this.

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>…" to update what will be committed)
  (use "git checkout -- <file>…" to discard changes in working directory)

    modified:   content/page/bash-commands/index.md

no changes added to commit (use "git add" and/or "git commit -a")

You may run git diff / git diff --cached to view the unstaged / staged differences.
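
The staged/unstaged split in action, in a throwaway repo (all names and config values are demo-only):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo 'v1' > file.txt
git add file.txt && git commit -qm 'v1'
echo 'v2' > file.txt
unstaged=$(git diff)          # the edit shows up here…
git add file.txt
staged=$(git diff --cached)   # …and moves here after staging
```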

commit changes

Write the changes on the staged files to the history. Vim will pop up by default.

$ git commit

If you don’t want to work with Vim, you may either

  • configure the variable core.editor, or

  • input your commit message using STDIN.

    $ git commit -m "commit message"
    

If you want to abort a Git operation that automatically opens a Vim session (e.g. git commit, git rebase, etc.), exit Vim with an error code using :cq.

view changes

  • show commits: git log
  • show code change: git show
  • view project at a certain commit: git checkout <SHA1-hash>

At this stage, one will be able to manage his/her own repo locally.

search tracked contents

Use git grep -- <path>. Most options for git-grep are identical to those for grep, except the absence of -R, since Git works with tracked files, which are rarely, from my experience, symlinks.

work with different parallel versions

  • branching: for testing new features on a new branch without cracking the core stuff

    $ git branch <branch-name>
    

    New syntax since Git 2.23.0:

    $ git switch -c <branch-name> <source-branch>
    
  • check branch(es): git branch only shows local branch(es)

    • -a: show remote-tracking branch(es) also

    • -v: show the SHA-1 hash at the tip of branch

    • -vv: also show each branch’s associated remote-tracking branch, and its comparison with that branch in terms of the number of commits ahead/behind.

      $ git branch -vv
      * master 185207a [origin/master: ahead 5] Bash: curl & git
      
  • switch between branches: git switch <branch-name>

  • delete branch: git branch -d <branch-name>

  • rename branch: git branch -m <old-branch> <new-branch>

  • compare branches/commits: git diff <branch1>..<branch2>

    • two dots: usual diff -u

      $ git merge
      $ git diff HEAD^..HEAD  # view changes introduced by previous merge
      
    • three dots: functions like two dots, but compares their common ancestor with branch2 instead; useful for checking merge conflicts.

    The two arguments for git diff can technically be any Git references (commits, branches, tags).

  • merge changes from other local branch: git merge <branch-name>

work with others

If you want to save your code to GitHub, GitLab or Bitbucket, or to share it via an SSH remote, upload your repo as follows.

  1. Add/set remote:
    • Existing repo: git remote set-url <remote-name> <addr>
    • New repo started from scratch: git remote add <remote-name> <addr>
    • <remote-name>: take this to be origin for the first time if you’re newbie.
    • <addr>: an HTTPS/SSH remote. For example,
      • SSH: git@gitlab.com:VincentTam/vincenttam.gitlab.io
      • HTTPS: https://gitlab.com/VincentTam/vincenttam.gitlab.io
  2. Submit your changes:
    • first time: git push -u <remote-name> <branch-name>. Set upstream branch to track in remote followed by sending out your commits.
    • afterwards: git push
    • delete remote branch: git push <remote-name> :<branch-name>
    • There’s no way to rename a remote branch. Stack Overflow suggests cloning and removing the target branch instead.
  3. Get updates from others:
    • git fetch <remote-name> <branch-name>: download a remote branch without merging it against the current branch.
    • git checkout <remote-name>/<branch-name>: view a remote branch
    • git checkout -b <loc-branch> <remote-name>/<remote-branch-name>: create a local branch from a remote one so that the former tracks the latter. (Newer syntax: git switch -c <loc-branch> <remote-name>/<remote-branch-name>, where -c means --create)
    • git merge <remote-name>/<branch-name>: merge it against the current branch.
      • --ff: fast-forward if possible, otherwise create merge commit. default behaviour.
        • If FETCH_HEAD is a descendant of HEAD, the history will be linear: HEAD is fast-forwarded to FETCH_HEAD.
        • Otherwise, a merge commit will be created.
      • --no-ff: always create a merge commit
      • --ff-only: fast-forward only. If it’s not possible, abort the merge with an error code.
      • --squash: group all remote branch commits into one commit then append it to HEAD.
        • no merge relationship is made
        • no commit produced right away. Either add git commit -m "…" or use git merge --commit instead. (--commit is incompatible with --squash)
        • refer to Stack Overflow’s comparison on “squash and merge” (git merge --squash) and “rebase and merge” (git rebase, the --merge strategy is implied from the absence of -s).
    • git pull <remote-name> <branch-name>: perform git fetch and git merge.
    • git pull --rebase/git pull -r: perform git fetch and git rebase to get a linear commit history instead of multiple parents in the HEAD commit. Useful when you’ve already committed to a branch whose upstream has also changed, and you want a linear commit history.
      1. fetch commits from remote (git fetch)
      2. modify commit history by applying local commits on top of remote ones (git rebase).
    • abandon a merge: git merge --abort
    • revert a single commit
      • non-merge commit: git revert <commit>
      • merge commit: in git revert -m [1|2] <commit>, -m selects either the first or the second parent of <commit>.
    • See my GitHub pull request tutorial to get the latest features in development from GitHub.
  4. Save your unfinished work when switching to another branch that has a different version of the current unstaged file(s).
    • git stash: save your unfinished work.
    • git stash pop: apply and delete the most recent stashed changes.
    • git stash apply: apply and keep the most recent stashed changes.
    • git stash list: list the stashes.
  5. work with submodules: a submodule is an independent Git repo under the main Git repo. It’s often a component on which the project depends. In the case of this site, it’s the themes that it uses.
    • add a submodule

      git submodule add [SUBMODULE_URL] [PATH]
      
    • fetch submodule(s) after clone

      git submodule update --init --recursive --remote
      
    • update existing submodule(s)

      git submodule update --recursive --remote
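
The stash workflow from step 4 can be sketched in a throwaway repo (all names and config values below are demo-only):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo 'base' > app.txt
git add app.txt && git commit -qm 'base'
echo 'unfinished' >> app.txt
git stash                        # working tree is clean again
clean=$(git status --porcelain)
git stash pop                    # the unfinished edit comes back
restored=$(tail -n 1 app.txt)
```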
      

modify history with git-rebase

Given the following graph.

        - A - B - C  topic*
      /
D - E - F - G  master

On the branch topic, these two commands do the same thing.

$ git rebase master
$ git rebase master topic

They give the following graph.

               - A' - B' - C'  topic*
              /
D - E - F - G  master

I find a great example that explains the use of --onto.

D - E - F - G  master
      \
       - A - B - C  server
          \
           - H - K  client

The following three-argument command seems scary at first glance.

$ git rebase --onto master server client

What it does:

  1. Find the common ancestor of 2nd & 3rd arguments. (i.e. A)
  2. Extract the branch A..K excluding A. (i.e. H - K)
  3. Place the extracted branch on top of 1st argument. (i.e. G)

Result:

               - H' - K'  client
              /
D - E - F - G  master
      \
       - A - B - C  server

N.B.: H and K are still accessible via their SHA-1 hashes, but the branch name client will point to K' instead of K after this Git rebase.
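
The graphs above can be reproduced in a scratch repo. Commit subjects stand in for D…K (each commit adds its own file, so the rebase applies without conflicts), and the result can be read straight off git log. All names below are demo values:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb master
git config user.email demo@example.com && git config user.name demo
# Each commit creates a file named after its subject.
commit() { echo "$1" > "$1.txt"; git add "$1.txt"; git commit -qm "$1"; }
commit D; commit E; commit F; commit G          # master: D-E-F-G
git switch -q -c server master~2                # branch off E
commit A; commit B; commit C                    # server: …E-A-B-C
git switch -q -c client server~2                # branch off A
commit H; commit K                              # client: …A-H-K
git rebase -q --onto master server client
git log --format=%s | xargs                     # → K H G F E D
```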

Now it’s easy to understand what the following command does.

$ git rebase -i HEAD~N

It opens an editor session (--interactive mode) containing N commits (from HEAD~{N-1} to HEAD in chronological order). It’s used for modifying the last N commits on the current branch.

The commands git commit --fixup=<commit> and git rebase -i --autosquash <commit>~ can be used to fix a commit then rebase it into current branch.

git-annex

A librarian-like VCS for binary files, managed through symbolic links.

Similar to git, I’ll only put the necessary commands to keep a repo running.

getting started

$ git init
$ git annex init "machine1"

add files

$ git annex add <file>

commit staged changes

$ git commit -m "<commit-msg>"

After git annex add, one can operate on file as in git.

add local remote

For example, backing up data on a USB drive.

  1. Get git-annex started on usb.
  2. Add usb as remote on machine1: git remote add usb <path>
  3. and vice versa: git remote add machine1 <path>
  4. Sync remotes: git annex sync [--remote=<remote-name>]
  5. Send some files to usb: git annex sync --content

Working with the “library”:

  • Update catalogue: git annex sync
  • Catalogue check: git annex whereis <file>
  • Demand a copy: git annex copy <file> --from usb
  • Drop unnecessary copies: git annex drop <file>

If one of the remotes is down, git annex sync throws an error. To avoid this, set --remote=….

Before dropping a copy, ask git annex whereis <file>’s other copies.

If, for some reasons, you don’t want git-annex to touch your project, add an empty file .noannex at root level. This is useful if your project is being managed by other software. (say, rsync)

Further reading:

  1. Lee Hinman’s tutorial
  2. Git annex — un cas d’utilisation
  3. Git annex’s official walkthrough

gpg

GNU Privacy Guard

You may replace foo with your short key ID (the last 8 hex digits) or the entire key hex ID. Here are my conventions:

  • foo.pub: public key

  • foo.key: secret key

  • foo.rev: revocation certificate

  • msg.gpg: symmetrically encrypted message in binary output

  • msg.asc: symmetrically encrypted message in armor output

  • generate a new keypair: gpg --gen-key

  • generate a revocation certificate:

    gpg --output foo.rev --gen-revoke mykey
    

    This should be done immediately after the previous step: in case mykey’s passphrase is lost, its public key can still be revoked. The certificate is in armor output.

  • list public keys: gpg --list-keys (or gpg -k)

  • list secret keys: gpg --list-secret-keys (or gpg -K)

  • export a public key to someone

    gpg --output foo.pub --armor --export foo@bar.com
    
    • --output foo.pub can be omitted for output to stdout.
    • --armor: ASCII output
  • export a secret key to someone

    gpg --output foo_secret.gpg --armor --export-secret-keys foo@bar.com
    
  • import a public key (from other devices/another person, to somewhere in ~/.gnupg/)

    gpg --import foo.pub
    

    The --import option can also be used for secret keys and revocation certificate.

  • encryption

    gpg --output msg.txt.gpg --encrypt --recipient foo@bar.com msg.txt
    

    Add --armor for armor output.

    gpg --armor --output msg.txt.asc --encrypt --recipient foo@bar.com msg.txt
    
  • decryption

    gpg --output msg.txt --decrypt msg.txt.gpg
    

    This command is much shorter than the previous one because during encryption, recipient’s public key is used. His/her secret key in the keyring (in ~/.gnupg/) is used during decryption.

  • symmetrically encrypt msg.txt

    gpg --symmetric msg.txt
    

    The command for decryption is the same. The system will prompt for the passphrase used for encryption.

  • publish your key to a key server:

    • GnuPG has provided --send-key(s) for this.

      gpg --keyserver certserver.pgp.com --send-key keyid
      

      The keyid is a 0x hex no. in the key list. To send multiple keys, use --send-keys instead.

    • OpenPGP’s usage page has given another command (that I’ve never tried).

      gpg --export foo@bar.net | curl -T - https://keys.openpgp.org
      

      From what I’ve understood from the linked explanation, a public key uploaded using the first way isn’t searchable by email address.

  • sign a file (e.g. someone else’s key)

    gpg --output key.gpg.sig --sign key.gpg
    
  • receive a key from key server

    • known key ID:

      gpg --keyserver keys.openpgp.org --recv-key[s] keyid [keyid2…]
      
    • search and import a public key by an email address:

      gpg --auto-key-locate hkp://keyserver.ubuntu.com --locate-keys foo@bar.net
      

      Without hkp://, I got the message “gpg: invalid auto-key-locate list”.

  • publish someone else’s key that you’ve signed to key server

    gpg --keyserver certserver.pgp.com --send-key someone@else.com
    

If you’ve imported a(n activated) revocation certificate (by gpg --import foo.rev), the key foo would be revoked. If this revoked key is then published to the key server, then it’s definitely revoked, and this action is irreversible.

$ gpg -k
/c/Users/sere/.gnupg/pubring.kbx
--------------------------------
pub   rsa4096 2021-05-13 [SC]
      B80AA49DC3E380C7F0DB11D838D149AF01EC4A03
uid           [ultimate] Vincent Tam <xxxx@live.hk>
sub   rsa4096 2021-05-13 [E]

pub   rsa4096 2018-06-28 [SC] [revoked: 2018-06-28]
      0E1E02210A8EBED57214DC638330E599BA36B922
uid           [ revoked] Vincent Tam <xxxx@live.hk>

As long as this revoked key is not published, it’s possible to reverse the import of the revocation certificate foo.rev.

gpg --expert --delete-key BA36B922
# confirm by pressing y
# fetch the public key again from public key server
gpg --keyserver keyserver.ubuntu.com --recv-keys BA36B922

Reference: a GPG question on Super User

grep

Globally search a regular expression and print matching lines.

General usage:

  • read from file(s): grep [FLAG(S)] [PATTERN] [FILE]
  • pipe from STDIN: [cmd] | grep [FLAG(S)] [PATTERN]
  • read from STDIN:
    1. Type grep [PATTERN].

      $ grep foo
      
    2. Input the content to be searched. After you’ve finished, terminate the line with newline ↵.

      $ grep foo
      foobar
      
    3. Your search content is duplicated, with the matching part highlighted.

Flags explanations: (in alphabetical order)

  • -A [n]: print n lines after the matching line

  • -B [n]: print n lines before the matching line

    $ grep -B 3 Author config.toml
    #  src = "img/hexagon.jpg"
    #  desc = "Hexagon"
    
    [Author]
    
  • -C [n]: use this when the arguments for -A and -B are the same

  • -c: output the number of matching lines instead of the matching parts

    $ grep -c menu config.toml
    8
    
  • -E: use regular expression in the search string

    $ git grep -E '(PDF|Lua)LaTeX'
    content/page/simple-document-templates/index.md:title="LuaLaTeX allows using sys
    tem fonts"
    content/post/2022-06-04-dvisvgm-s-issue-with-fill-pattern/index.md:1. TeX → PDF
    with PDFLaTeX
    
  • -i: ignore case

  • -I: ignore binary files

    $ cd content/post/2018-07-23-fujitsu-lh532-keyboard-cleaning
    $ grep if *
    Binary file 20180708_234331_HDR.jpg matches
    Binary file 20180709_001359_HDR.jpg matches
    index.md:Finally the N key was fixed: *no* noticeable difference from other norm
    al size
    

    Use this flag to ignore the JPG files (and match the text files only).

    $ grep -I if *
    index.md:Finally the N key was fixed: *no* noticeable difference from other norm
    al size
    
  • -l: show matched file path

  • -L: show unmatched file path

  • -n: also show matching line number

    $ grep -n menu config.toml
    70:[[menu.main]]
    108:[[menu.main]]
    
  • -o: print only matching part. Useful for string extraction.

  • -P: Perl mode, useful for lazy match (.*? for minimum expansion).

  • -r: recursive

    $ grep -r bl[au] content static
    content/post/2018-06-28-xubuntu-dualboot-on-fujitsu-lh532.md:fa-laptop" aria-hid
    den></i>, Windows 10 often falls into blue screen.
    content/post/2018-07-04-fujitsu-lh532-fan-cleaning/index.md:blanket can put a ti
    ny electronic component into intensive care.  To
    content/post/2018-08-23-brighten-image-with-gimp/index.md:B | <i class="fa fa-sq
    uare blue"></i>--<i class="fa fa-square yellow" aria-hidden></i>
    static/css/custom.css:.blue {color: #0000FF;}
    
  • -R: like -r, but dereference links

  • -v: inverse match

  • -w: match whole word, similar to \< and \> in Vim.

  • -q: quiet

grep doesn’t work for multi-line regex match. Use sed or awk instead.
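
For instance, to extract a multi-line block between two marker lines, sed's address ranges are one common substitute (the markers and file name below are made up):

```shell
printf 'junk\nBEGIN\npayload\nEND\nmore junk\n' > multiline.txt
# -n suppresses default output; /BEGIN/,/END/p prints the inclusive range.
extracted=$(sed -n '/BEGIN/,/END/p' multiline.txt)
echo "$extracted"
```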

head

Print the first n lines of a file or STDIN. (n = 10 by default) It works like tail, but prints from the start of the file instead of the end.

  • -n [n]: print first n lines

  • -c [n]: print first n characters

  • if a minus sign - is used in front of [n] in -n [n] / -c [n], the output is the file with the last [n] line(s)/character(s) removed. (A leading plus sign makes no difference, as the examples below show.)

    $ seq -s ' ' 10 | od -bc
    0000000 061 040 062 040 063 040 064 040 065 040 066 040 067 040 070 040
            1       2       3       4       5       6       7       8    
    0000020 071 040 061 060 012
            9       1   0  \n
    0000025
    $ seq -s ' ' 10 | head -c 10 | od -bc
    0000000 061 040 062 040 063 040 064 040 065 040
              1       2       3       4       5    
    0000012
    $ seq -s ' ' 10 | head -c +10 | od -bc
    0000000 061 040 062 040 063 040 064 040 065 040
              1       2       3       4       5    
    0000012
    $ seq -s ' ' 10 | head -c -10 | od -bc
    0000000 061 040 062 040 063 040 064 040 065 040 066
              1       2       3       4       5       6
    0000013
    

hexdump

Display binary files as blocks of hexadecimal numbers.

  • -c: character

See also: od

ifconfig

Display and/or modify connection info.

  • connections
    • e…: ethernet, wired connection
    • lo: localhost
    • w…: Wi-Fi
  • inet: internal IP address

To get the external IP address, one needs to send a request to an external server. See wget for the command to do so.

info

More informative than man.

General usage: info [cmd]

less

View a file without editing it, in a separate popup session with some Vim-like key bindings.

General usage: less [FILE(S)]

$ less config.toml

Compare with:

  • more: unlike more, less leaves nothing on the screen after you finish reading [FILE], which is ideal for text/code files with a lot of lines.
  • vim: less borrows some keys from Vim, but loads faster due to its limited functionality. This suits previewing important files, especially system files.
Key Function
b Scroll one page backward
f Scroll one page forward
d Scroll half page down
u Scroll half page up
g Jump to first line
G Jump to last line
j Move the cursor one line down
k Move the cursor one line up
/ Forward search
? Backward search
n Next match
N Previous match
h Show help page
q Quit

Some of the above keys can be quantified by prepending a number as in Vim.

Thanks to Stephan’s answer on Unix.SE, I’ve learnt the navigation across files in less.

command explanation
:n[N] go to the Nth next file. (N defaults to 1.)
:p[N] go to the Nth previous file. (N defaults to 1.)

ls

List files in a directory.

Default bash aliases:

alias  flags added
l      -CF
la     -A
ll     -alF

Flags explanations: (in alphabetical order)

  • -a: all

  • -A: almost all (except current directory . and parent directory ..)

  • --block-size=[SIZE]: SIZE represents a unit (B, K, M, G).

  • -d: I don’t understand what “list directories themselves, not their contents” means, as it can display the contents of a directory. Running this on any folder containing files and subfolders illustrates my doubts.

    # see what '-d' does on current path '.'
    $ ls -ld
    drwxrwxr-x 9 vin100 vin100 4096 Aug 27 22:04 .
    
    # see what '-d' does on every non-hidden folders and files `*`
    $ ls -ld *
    drwxrwxr-x 2 vin100 vin100 4096 Jun 28 21:00 archetypes
    -rw-rw-r-- 1 vin100 vin100 2734 Aug 28 17:30 config.toml
    drwxrwxr-x 4 vin100 vin100 4096 Jun 29 00:47 content
    drwxrwxr-x 3 vin100 vin100 4096 Aug 25 15:29 layouts
    drwxrwxr-x 9 vin100 vin100 4096 Jun 29 00:47 public
    drwxrwxr-x 4 vin100 vin100 4096 Aug 25 13:36 static
    drwxrwxr-x 3 vin100 vin100 4096 Aug 27 18:06 themes
    
    # take away '-d', compare the two outputs
    $ ls -l *
    -rw-rw-r-- 1 vin100 vin100 2734 Aug 28 17:30 config.toml
    
    archetypes:
    total 4
    -rw-rw-r-- 1 vin100 vin100 84 Jun 28 21:00 default.md
    
    content:
    total 12
    -rw-rw-r-- 1 vin100 vin100  279 Aug 28 02:23 _index.md
    drwxrwxr-x 3 vin100 vin100 4096 Aug 28 17:18 page
    drwxrwxr-x 9 vin100 vin100 4096 Aug 23 13:36 post
    

    I use -d with * to keep ls from expanding the subfolders.

man

Display manual page in a separate session. Less detailed than info.

General usage: man [num] [cmd].

mkdir

Make directory.
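The flag I reach for most is -p; a minimal sketch (the directory names here are made up):

```shell
# -p creates any missing parent directories,
# and does not complain if they already exist
mkdir -p demo/nested/dir
mkdir -p demo/nested/dir   # second run: still exits with status 0
```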

more

Works like cat, but for files with more lines. Opposite of less.

General usage: more [FILE]

npm

Node.js® Package Manager: manage different versions of Node.js packages.

  • install <package>: install <package> from the official repo.
    • -g: install globally under the user’s home directory.
    • otherwise: install locally under the current working directory.
  • version: print the versions of NPM, Node.js and related components.
  • --version: list NPM’s version.
  • help: get concise help.
  • -l: get a full list of options.

nvm

Node Version Manager: manage different versions of Node.js and enable easy switching between them. Useful for cross-platform development.

  • install <vnum>: install Node.js at version <vnum>.
  • install-latest-npm: try to upgrade to the latest working NPM on the current Node.js version.
  • use <vnum>: use Node.js at version <vnum>.
  • run <vnum> <app>: run <app> with Node.js at version <vnum>.
  • help: display help.
  • ls: display installed versions of Node.js.
  • current/version: display the currently active version of Node.js.
  • --version: display the installed version of NVM.

od

Display binary files as blocks of octal numbers.

  • -c: output human-readable characters instead of decimals, same as -t c.

  • -A [doxn]: set file offset (i.e. position of the file relative to the start) display mode (observable from the leftmost column). A counterpart of the line number display in usual text files.

    mode offset format
    d decimal
    o octal
    x hexadecimal
    n none
  • -t [TYPE]: control the output format

    TYPE select …
    a named characters, ignoring high-order bit
    c printable character / backslash escape
    d[SIZE] (I don’t know how to use signed decimals.)
    f[SIZE] (I don’t know how to use floats.)
    o[SIZE] octal SIZE-byte units
    u[SIZE] unsigned decimal SIZE-byte units
    x[SIZE] hexadecimal SIZE-byte units

    Additional [SIZE]s:

    TYPE SIZE for …
    [doux] C char
    [doux] S short
    [doux] I int
    [doux] L long
    f F float
    f D double
    f L long double

    z after [TYPE] allows the display of a >…< line showing the characters read in a human-readable format (or . otherwise, say for \r.)

    In case of multiple -t, each format will be displayed in a new line. Different formats corresponding to the same offset will be aligned.

  • -w[BYTES]: control the width by printing BYTES bytes per line in the output. The manual says the default value is 32, but that applies only to a bare -w; my testing on both Git Bash and M$ WSL shows that without the flag the default is -w16, not -w32.

    To illustrate this finding, I’m using a simple for loop with brace expansion, which doesn’t work on plain SH.

    $ (for c in {a..z}; do printf '%c' $c; done;) | od -t o1 -c
    0000000 141 142 143 144 145 146 147 150 151 152 153 154 155 156 157 160
              a   b   c   d   e   f   g   h   i   j   k   l   m   n   o   p
    0000020 161 162 163 164 165 166 167 170 171 172
              q   r   s   t   u   v   w   x   y   z
    0000032
    

    This is the same as the following.

    $ (for c in {a..z}; do printf '%c' $c; done;) | od -w16 -t o1 -c
    0000000 141 142 143 144 145 146 147 150 151 152 153 154 155 156 157 160
              a   b   c   d   e   f   g   h   i   j   k   l   m   n   o   p
    0000020 161 162 163 164 165 166 167 170 171 172
              q   r   s   t   u   v   w   x   y   z
    0000032
    

    When it’s changed to -w32, it becomes two long rows.

    $ (for c in {a..z}; do printf '%c' $c; done;) | od -w32 -t o1 -c
    0000000 141 142 143 144 145 146 147 150 151 152 153 154 155 156 157 160 161 162
    163 164 165 166 167 170 171 172
              a   b   c   d   e   f   g   h   i   j   k   l   m   n   o   p   q   r
      s   t   u   v   w   x   y   z
    0000032
    

Reminder:

  • 1 bit is either 0b0 (i.e. 0₂) or 0b1 (i.e. 1₂)
  • 1 byte = 8 bits (e.g. 0b10110011 / 10110011₂) = 2 hexadecimal digits
  • 1 ASCII character = 1 byte (e.g. H = 0x48 / 48₁₆ = 72 / 72₁₀)

See also: hexdump

openssl

OpenSSL tools

I only know two usages.

  1. random base64 encoded string generation.

    $ openssl rand -out output.txt -base64 12
    $ cat output.txt
    0DNyuVqNxecZI+Fo
    
  2. RSA key generation

    Generate a 2048-bit RSA key pair and write it to key.pem

    openssl genrsa -out key.pem 2048
    

    To add extra security to the secret key, it may be encrypted with -des3.

    To export the public key corresponding to a secret key, use -pubout.

    openssl rsa -in key.pem -outform PEM -pubout -out pubkey.pem
    

pdfcrop

pdfcrop --margins '-30 -30 -250 -150' --clip input.pdf output.pdf
  • --margins '-L -T -R -B'
    • negative margins crop input.pdf
    • positive margins add white space around input.pdf
    • a single number applies the same width to all four margins
  • --clip enables clipping support. I don’t know what that’s for.

For more information, see PDFcrop’s man page.

pdfgrep

Grep string from PDF file(s).

General usage:

  • Single file: pdfgrep "foo" bar.pdf
  • Multiple files: pdfgrep -R "foo" ./some/path

pdftoppm

Convert PDF to PPM/PNG/JPEG images.

Usage:

pdftoppm [options] [PDF-file [PPM-file-prefix]]

PDF-file should include the .pdf file extension.

In case of missing PPM-file-prefix, it outputs to STDOUT.

  • output format: default to PPM
    • -png (resp. -jpeg and -tiff): switch to PNG (resp. JPEG and TIFF).
    • -mono (resp. -gray): switch to monochrome PBM (resp. grayscale PGM).
  • page range control:
    • -singlefile: convert the first page only, without adding -%d to PPM-file-prefix. Useful for single-page PDFs (e.g. article.pdf gives article.png, not article-1.png)
    • -f [START]: first page used for conversion
    • -l [END]: last page used for conversion
    • -o: convert only odd numbered pages.
    • -e: convert only even numbered pages.
  • resolution: default to 150, which I find insufficient. I suggest using at least 450.
    • -r [NUM] (resp. -rx [NUM] and -ry [NUM]): set resolution (resp. x-resolution and y-resolution) to NUM.
  • output scaling: scale to fit a given dimension.
    • -scale-to [NUM]: scales each page’s long side to NUM.
  • decrypt PDF
    • -opw (resp. -upw): input encrypted PDF-file’s owner (resp. user) password.

pgrep

ps + grep

Basic usage: pgrep [PROC]. Only one pattern can be provided.

  • -f: match full pattern instead of process name
  • -l: show the process name in the 2nd column

It took me a while to understand what full pattern meant. Process 2339 bears the name Web Content, but it was spawned by firefox. Viewing its full command (via ps aux), I understood why this process appeared in the output of pgrep -fl firefox.

$ pgrep -l firefox
2292 firefox
$ pgrep -lf firefox
2292 firefox
2339 Web Content
17363 Web Content
$ ps aux | grep 2339
vin100    2339 24.4  5.2 2230872 419804 ?      Sl   01:43  28:34 /usr/lib/firefo
x/firefox -contentproc -childID 1 -isForBrowser -prefsLen 10155 -schedulerPrefs
0001,2 -parentBuildID 20180905220717 -greomni /usr/lib/firefox/omni.ja -appomni
/usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 2292 true tab
vin100   26892  0.0  0.0  22008  1104 pts/1    S+   03:40   0:00 grep --color=au
to 2339

Remarks: This command is unavailable on Git Bash.

printf

Print a string according to the format string. Useful when you have to display constant string(s) concatenated with the string value(s) of shell variable(s).

Usage: printf [-v VAR] [FMT_STR] [ARGS]

  • -v VAR: output to shell variable VAR instead of STDOUT. If it’s omitted, printf will output to STDOUT.
  • FMT_STR: format string
    • %d / %i: decimal
    • %[MIN_WIDTH].[PREC]f: floating-point number with PREC digits after the decimal point, padded to at least MIN_WIDTH characters in total
  • ARGS: variables referenced in the FMT_STR
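A minimal sketch of the flags above (the variable name and format strings are my own; -v needs Bash’s builtin printf):

```shell
# -v stores the formatted result in a shell variable instead of printing it
printf -v price 'total: %6.2f' 3.14159
echo "$price"            # → total:   3.14
# %03d zero-pads an integer to width 3
printf '%03d\n' 7        # → 007
```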

ps

Prints process status.

I only know ps aux, which displays every process. ps ux only shows processes owned by the current user.

  • -C [cmd]: show process cmd only

qpdf

Split/merge/rotate/encrypt/decrypt PDF files.

Generic usage:

  • --linearize: optimize file for web.

  • --rotate=[+|-]angle:page-range: possible parameters for angle are 90, 180 and 270.

  • --split-pages=n [INPDF] [OUTPDF]: split file into groups of n pages. n defaults to 1.

    $ qpdf --split-pages foo.pdf bar.pdf
    $ ls bar.pdf
    bar-01.pdf  bar-03.pdf  bar-05.pdf  bar-07.pdf  bar-09.pdf  bar-11.pdf
    bar-02.pdf  bar-04.pdf  bar-06.pdf  bar-08.pdf  bar-10.pdf  bar-12.pdf
    

    An optional string %d in OUTPDF is replaced with a zero-padded page number.

    $ qpdf --split-pages twopage.pdf o%dut.pdf
    o1ut.pdf  o2ut.pdf
    
  • --empty: discard INPDF metadata

  • --pages [INPDF] [--password=PASSWORD] [RANGE] [[INPDF2] [RANGE]…] --: extract pages

    • RANGE: apart from the basic 1-3,5, you may reverse selection order.

      • z: last page
      • r[n]: the nth page counting from the end
      • 3-1: pages one to three in reverse order
      • :even/:odd: optional suffix for even/odd pages
      • empty: select all pages (starting with qpdf 5.0.0)
    • INPDF*: multiple files don’t need to be separated by --. (The command in Mankier is wrong.)

      $ qpdf --empty --pages input1.pdf 1,6-8 input2.pdf -- output.pdf
      
    • --password=PASSWORD to access file protected with PASSWORD.

  • --encrypt [USER_PASSWORD] [OWNER_PASSWORD] [KEY_LENGTH]: encryption

    $ qpdf --empty --encrypt [USER_PASSWORD] [OWNER_PASSWORD] 256 -- \
    --pages input1.pdf 1-z:even input2.pdf -- output.pdf
    

    The user password is for the reader, and the owner password is for the owner. For a more fine-grained control (e.g. restrict picture extraction), view the manual, which says it’s possible to use an empty string for user password and a nonempty one for owner password.

  • --decrypt: remove password protection, often used with --password=[PASSWORD].

References:

  1. Official manual
  2. Mankier

rpm

An RPM cheat sheet on Cyberciti.

rsync

Remote synchronization. Transfers files locally, over SSH, or via the rsync daemon protocol.

General usage: rsync [FLAGS] [SRC] [DEST].

  • SRC: source
    • files
    • folders
      • with trailing /: send contents inside the folder
      • without trailing /: send whole folder
  • DEST: destination, a file path

Flags explanations: (in alphabetical order)

  • -a: archive, preserve file attributes, permissions and modification times

  • -u: update, only send file(s) in SRC newer than DEST

  • -v: verbose, print every operation to STDOUT

  • -z: compress data during transfer.

  • --exclude='[SUBDIR]': exclude [SUBDIR]

    To quickly test a Hugo theme without git submodule, I use this command.

    rsync -auv --exclude='themes' --exclude='.git*' ~/bhdemo-public/ ~/bhdemo
    
  • --exclude-from='[SUBDIR-LIST]': exclude contents listed in the file [SUBDIR-LIST].

References: Six rsync examples from The Geek Stuff

screendump

Record a TTY session into a text file (as one single long line). Requires sudo privileges.

$ sudo screendump 1 > ~/mylog.txt
[sudo] password for vin100:
$ cat ~/mylog.txt

Ubuntu 18.04.1 LTS vin100-LIFEBOOK-LH532 tty1

vin100-LIFEBOOK-LH532 login:

sed

Stream editor

$ sed [FLAG(S)] '[range][cmd]'

Flags explanations: (in alphabetical order)

  • -e: expression; use it to supply multiple expressions
  • -i: in place editing
  • -n: no normal output

[range] can refer to a line number ($ meaning the last line) or a scope /[PAT]/. The latter can be used to remove empty lines.

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>…" to update what will be committed)
  (use "git checkout -- <file>…" to discard changes in working directory)

        modified:   content/page/bash-commands/index.md
        modified:   content/page/sublime/index.md

no changes added to commit (use "git add" and/or "git commit -a")
$ git status | sed '/^$/d'
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>…" to update what will be committed)
  (use "git checkout -- <file>…" to discard changes in working directory)
        modified:   content/page/bash-commands/index.md
        modified:   content/page/sublime/index.md
no changes added to commit (use "git add" and/or "git commit -a")

A [range] can be inverted by !, so sed '$!d' works like tail -1.

Some common [cmd]:

  • a: append
  • d: delete
  • n: next: clear pattern space (PS) and go to next line
  • p: print
  • {…}: can be used with pattern like /PAT/{n;p} for conditional operations.

[cmd]’s are separated by semicolon ;.

Some less common [cmd]:

  • N: go to next line, but append newline ↵ and next line to PS, useful for multi-line regex match.
  • q: quit, allows setting status number
  • x: exchange PS and hold space (HS), can be used to detect if the text has been changed.
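Two small sketches of the commands above (the input lines are made up):

```shell
# '$!d' deletes every line except the last, behaving like tail -1
seq 5 | sed '$!d'                                 # → 5
# /PAT/{n;p;}: on a line matching foo, advance and print the next line
printf 'foo\nbar\nbaz\n' | sed -n '/foo/{n;p;}'   # → bar
```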

seq

Print an arithmetic progression with

  • START: start of the sequence (default to 1 if only one argument is provided)
  • CD: common difference (default to 1 if only two arguments are provided)
  • UPPER_BOUND: upper bound of the sequence (mandatory)

Usage: seq [OPTION(S)] START CD UPPER_BOUND

  • -f: use printf-like formats for floating-point numbers. Incompatible with -w.
  • -s: set a string as a separator. (default: newline \n)
  • -w: left-pad zero(es) 0 to unify each term’s display width.
  • --help: print help
  • --version: print version
$ seq -s ', ' -w 10
01, 02, 03, 04, 05, 06, 07, 08, 09, 10
$ seq -s ',' -f '%6.2f' 1.3 2.4 12.1
  1.30,  3.70,  6.10,  8.50, 10.90

shrinkpdf

I’ve added Alfred Klomp’s simple Ghostscript wrapper to ~/bin, which is on my $PATH. The > is optional. The third argument indicates the quality; it defaults to 90, which is barely enough for pure text documents.

$ ./shrinkpdf.sh foo.pdf > bar.pdf
$ ./shrinkpdf.sh foo.pdf bar.pdf 150

quality/size tradeoff:

  • 150 suffices for text documents in emails 📧.
  • 200 suffices for documents with text and photos.
  • 300 for usual storage 💾
  • 450 / 600 for superior quality.

shuf

Shuffle input. (no repetition by default)

Flags explanations: (in alphabetical order)

  • -i: input range

    $ shuf -i 1-10 -n 3
    4
    9
    7
    
  • -n: number of output lines

  • -r: allow repetition

sleep

Start an idle process for n seconds, where n is the first argument.

Remarks: I can’t say the shell is suspended, despite the apparent effect, as appending & to the command lets it run in the background.

sort

Sort input file (in ascending alphabetical order by default).

  • -k[n]: sort according to column [n]
  • -n: use numeric order instead
  • -r: reverse order
  • -u: preserve only unique lines after sort
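A sketch combining these flags (the sample data is mine):

```shell
# numeric sort on the 2nd column, reversed (largest first)
printf '%s\n' 'apple 10' 'pear 2' 'plum 7' | sort -k2 -n -r
# → apple 10
# → plum 7
# → pear 2
```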

ssh

Secure shell: access remote desktop on the network through an encrypted “tunnel”.

  • simplest syntax: ssh [USER]@[DOMAIN]

    $ ssh vin100@192.168.1.2
    

    The shell then prompts you for [USER]’s password. Upon success, you’ll be logged in as [USER].

  • one-shot command: ssh [USER]@[DOMAIN] [CMD]

    $ ssh vin100@192.168.1.2 ls ~/quickstart
    archetypes  config.toml  content  layouts  public  static  themes
    

ssh-add

Add SSH keys to the SSH agent. On my personal devices, I rarely have to type this command to add my SSH secret key

$ eval $(ssh-agent -s) && ssh-add ~/.ssh/id_ed25519

thanks to the ~/.bashrc script in Working with SSH key passphrases on GitHub’s docs, which prompts for your default SSH secret key’s passphrase if it’s passphrase-protected.

On shared devices (say, your company’s desktop, where you’re a normal user instead of an admin and your colleagues might know your account’s user name and password), you may use ssh-add -x to lock your SSH agent, so that the stored, passphrase-protected secret key can no longer be used directly. The command prompts for a lock passphrase (confirmed by retyping it); the input isn’t echoed to the terminal. This is useful when you need to leave your device for a short while without logging off (e.g. taking a short break). To unlock your agent, use ssh-add -X.

ssh-keygen

Generate an SSH key pair. They can be used for remote SSH access and Git service providers.

  • -t: algorithm used (rsa/ed25519/dsa/ecdsa). GitLab documentation suggests that one should favor ED25519 if possible. This encryption algorithm is currently supported by GitHub and GitLab.
  • -C: comment, say email address
  • -b: number of bits of the generated key (n/a for ED25519). A length of 4096 is recommended.
  • -f: specify the input file
  • -y: read input OpenSSH secret key and display the OpenSSH public key
  • -o: output the secret key in a newer, more secure format. (ED25519 keys already use the new format, so there’s no need for this flag.)
  • -l: print the fingerprint of the (public/secret) key file. (Public/secret gives the same result.)
    • -v: show the ASCII art representation instead.
    • -f: see above. If missing, prompt for file path.
    • -E: specify hash algorithm (sha256 (default)/sha1/md5)
  • -p: change passphrase. (-P and -N are omitted here, so that the passphrases won’t be logged.)

Examples

  1. Generate an ED25519 key pair.

    ssh-keygen -t ed25519
    
  2. Generate an ED25519 key pair for a Git service provider.

    ssh-keygen -t ed25519 -C "foo@test.com"
    
  3. Change passphrase.

    ssh-keygen -p -o -f <keyname>
    
  4. Generate the corresponding public ED25519 key from a secret one.

    ssh-keygen -yf ~/.ssh/id_ed25519 > ./id_ed25519.pub
    
  5. Get the MD5 hash of the fingerprint of a public SSH key. (Displayed in GitHub/GitLab’s account settings)

    ssh-keygen -E md5 -lf ~/.ssh/id_ed25519
    

Safety precautions:

  1. Encrypting the secret key with a secret passphrase can greatly enhance its security.
  2. Never disclose your secret key to others, including any remote service/server/storage (e.g. Cloud/USB storage, email), because a secret key represents an identity. If your friend/relative needs remote SSH access, ask them to create a new key pair.
  3. Use one key pair per device, so that intrusion of a device doesn’t compromise the security of other devices.
  4. Using one key pair per remote server won’t enhance security, because the only objects to protect are the secret keys: if ~/.ssh is unluckily intruded, all of them are exposed at once.

stty

Show info of TTY.

  • Return number of rows and columns

    $ stty size
    43 132
    
  • Set terminal dimensions

    $ stty cols 80
    $ stty rows 32
    

See also: tty

tac

Print content of file(s) like cat, but in a reversed manner.
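A quick sketch:

```shell
# print STDIN with the line order reversed
seq 3 | tac
# → 3
# → 2
# → 1
```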

tail

Print last n lines of a file or STDIN. (n = 10 by default) Opposite of head

  • -c [n]: output last n bytes. Useful for EOL detection at EOF.

  • -n [m]: output last m lines

  • if a plus sign + is used in front of [n] or [m], output starts at byte n (or line m) and runs to the EOF.

    $ seq 10 | tail -n 4
    7
    8
    9
    10
    $ seq 10 | tail -n +4
    4
    5
    6
    7
    8
    9
    10
    
  • -z: use null character \0 instead of newline ↵ as line delimiter. It can be used to process the output of find … -print0.

tar

Archive directories into a single file called a tarball. Options:

  • c: create tarball
  • f: specify filename of the output tarball
  • v: verbose
  • x: extract

Common compression options:

  • j: bzip2
  • J: xz
  • z: gzip
  • Z: compress

tee

Copy command output to both STDOUT and a file. Useful for inspecting and capturing command output simultaneously.

$ ls | tee eles
archetypes
config.toml
content
layouts
public
static
themes
$ cat eles
archetypes
config.toml
content
layouts
public
static
themes

> captures the command output without showing it (except errors).

test

Test expressions or file types. Two types of syntax are possible.

$ test {EXP}
$ [ {EXP} ]

See the last part of bash for chaining commands with && and ||.

  1. string comparison

    $ [ STRING1 = STRING2 ] && echo true || echo false
    false
    $ [ STRING1 = STRING1 ] && echo true || echo false
    true
    $ [ STRING1 != STRING2 ] && echo true || echo false
    true
    $ [ -n a ] && echo true || echo false  # test string with nonzero length
    true
    $ [ -z "" ] && echo true || echo false  # test string with zero length
    true
    

    Note that to compare the equality of two strings, a pair of single/double quotes ''/"" aren’t necessary, and only one equality sign = is needed.

    For the last two commands, the spaces are very important, and they can’t be omitted.

    The option -n a is equivalent to a. An application of this is the detection of newline \n at EOF.

    $ test $(tail -c1 .gitignore) && echo 'missing EOF!' || echo 'has EOF'
    has EOF
    $ test $(tail -c1 themes/beautifulhugo/static/js/katex.min.js) && \
    echo 'missing EOF' || echo 'has EOF'
    missing EOF
    

    Explanation: (thanks to Oguz Ismail’s answer on Stack Overflow)

    1. The command tail -c1 {FILE} inside the command substitution $( ) is executed in a subshell environment.
    2. tail -c1
      • if {FILE} has a \n at the EOF: gives a trailing \n;
      • otherwise: gives the last (non-\n) character.
    3. When the command substitution gets replaced by STDOUT of tail -c1 in step #2, any trailing \n is trimmed off, resulting in
      • first case in #2: an empty string
      • second case in #2: a nonempty string (the character is untouched)
    4. test evaluates the string with the -n option.
      • first case in #2: false (omitted {EXP} defaults to false)
        1. jump to ||
        2. execute echo 'has EOF'
      • second case in #2: true
        1. proceed to &&
        2. execute echo 'missing EOF'. This command exits normally and it gives the status code zero.
        3. meet || and terminate.
  2. =~: binary operator for regex matches

    $ [[ "foobar" =~ bar ]] && echo matched
    matched
    
  3. compare integers

    $ [ 6 -eq 4 ] && echo equal || echo not equal
    not equal
    

    Possible binary operators are:

    • -eq: =
    • -ne: ≠
    • -gt: >
    • -lt: <
    • -ge: ⩾
    • -le: ⩽

    It’s also possible to use (( … )).

    $ (( 3 == 3 )); echo $?
    0
    $ (( 3 == 4 )); echo $?
    1
    

    You may view Shell Equality Operators

  4. compare two files’ modification date (-nt, -ot)

    $ [ .gitignore -ot config.toml ] && echo older || echo newer
    older
    
  5. test existence of files

    • -d: {FILE} exists and is a directory
    • -e: {FILE} exists
    • -f: {FILE} exists and is a regular file
    • -h/-L: {FILE} exists and is a symbolic link

    In case that {FILE} is a symbolic link, test dereferences it except for -h and -L (because these two options test the existence of symbolic links, so there’s no point dereferencing the links).

time

Record the time taken for running a command.

General usage: time [command]

$ time sleep 5

real    0m5.002s
user    0m0.002s
sys     0m0.000s

tty

Output absolute file path of the current terminal. (no argument needed)

  • GUI: /dev/pts/1
  • TTYn: /dev/tty[n]

See also: stty

tr

Translate or remove certain characters. Like other GNU utilities (e.g. cat, grep, etc.), it accepts STDIN and/or input file(s), and writes to STDOUT.

General usage:

  • Replace character

    $ tr ' ' '_'
    foo bar
    foo_bar
    
  • Delete character

    $ tr -d ' '
    foo bar
    foobar
    

uniq

Output “locally” unique lines, i.e. remove neighbouring duplicate lines of input file(s) or STDIN.

$ printf '1\n1\n1\n2\n' | uniq
1
2
$ printf '1\n2\n1\n2\n' | uniq
1
2
1
2

See also: sort -u
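A printf-based sketch contrasting the two: uniq only collapses adjacent duplicates, while sort -u deduplicates globally.

```shell
printf '1\n2\n1\n2\n' | uniq
# → 1
# → 2
# → 1
# → 2
printf '1\n2\n1\n2\n' | sort -u
# → 1
# → 2
```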

unzip

The reverse of zip.

Basic usage: unzip [options] archive.zip [-x excluded] [-d target/]

  • -d output/path/: extract to the destination folder ./output/path. Defaults to the current directory . if this flag is omitted.

  • -x excluded_list: exclude files in excluded_list. For example, -x foo bar excludes both foo and bar from the extraction.

  • -l: list files without extracting them. Useful for determining the extraction destination.

  • -j: junk paths, i.e. extract particular file(s) without recreating the archive’s folder structure, as in

    $ unzip -j foo.zip file1_in_foo file2_in_foo
    

    The path(s) to file(s) in the ZIP archive can be checked with -l.

  • -c: output to STDOUT.

  • -f: freshen file only if the ZIP archive’s version is newer. It won’t create new files.

  • -P [pwd]: (use with care, because some OSes might leak user commands to others, e.g. the system admin) decrypt the ZIP archive using [pwd].

  • -v: verbose / version info

  • -hh / -h2: display extended help.

vim

Improved version of the vi text editor, which comes preloaded on every GNU/Linux and FreeBSD distro (even on macOS).

  • -R: read-only mode
Normal mode key Function
<C-b> Scroll one page backward
<C-f> Scroll one page forward
<C-d> Scroll half page down
<C-u> Scroll half page up
gg Jump to first line
G Jump to last line
h Move the cursor one character left
j Move the cursor one character down
k Move the cursor one character up
l Move the cursor one character right
/ Forward search
? Backward search
n Next match
N Previous match
i Enter insert mode before the cursor
:q Quit

P.S. It was my favorite editor.

wc

Word count

  1. Use files: output line, word and byte counts, followed by the file name

    $ wc .gitmodules
      4  11 133 .gitmodules
    
  2. Use STDIN: also show these three counts, but without the file name

    $ cat .gitmodules | wc
          4      11     133
    
  • -c: byte count
  • -w: word count
  • -l: line count
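A sketch of the three flags (the sample text is mine; note that -c counts bytes, while -m counts characters):

```shell
printf 'one two\nthree\n' | wc -l   # lines → 2
printf 'one two\nthree\n' | wc -w   # words → 3
printf 'one two\nthree\n' | wc -c   # bytes → 14
```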

wget

From web, get stuff (with its resources).

  • -c: works like -C in curl
  • -N/--timestamping: retrieve file only if the server’s copy is newer than the local one. (Thanks to Steven Penny’s answer.)
  • -O: works like -o in curl
  • -q: quiet, don’t output to STDOUT

To get the external IP address, try

wget -qO- http://ipecho.net/plain ; echo

See also: curl

You may refer to the TLDR page for more useful commands like

  1. Download page with its resources
  2. Download full website
  3. Download recursively a remote folder
  4. Download via authenticated FTP

xargs

Rearrange and/or execute arguments.

Output of ls without -l flag is ascending column-wise.

$ ls -A .git
branches        description  hooks  logs     ORIG_HEAD
COMMIT_EDITMSG  FETCH_HEAD   index  modules  packed-refs
config          HEAD         info   objects  refs

xargs -n [num] treats input as arguments delimited by space ␣, tab ↹ and/or newline ↵. It outputs [num] arguments delimited by space ␣ on each line.

$ ls -A .git | xargs -n 3
branches COMMIT_EDITMSG config
description FETCH_HEAD HEAD
hooks index info
logs modules objects
ORIG_HEAD packed-refs refs

Observe the difference of the output below with the first block in this section.

$ ls -A .git | xargs -n 3 | xargs -n 5
branches COMMIT_EDITMSG config description FETCH_HEAD
HEAD hooks index info logs
modules objects ORIG_HEAD packed-refs refs

xwd

Take a screenshot of the graphical desktop from a TTY. (requires sudo privileges)

This can be useful for capturing the login screen.

The following only works for LightDM.

I’ve refined Neroshan’s command on Ask Ubuntu into a shell script.

#!/bin/sh
# screenshot.sh
# USAGE: ./screenshot.sh [file-name]

chvt 7 # On Xubuntu 18.04
#chvt 1 # On Ubuntu 18.04
DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 xwd -root -out ~/screenshot.xwd
convert ~/screenshot.xwd $1
rm ~/screenshot.xwd
chvt `tty | sed 's:/dev/tty::'`

This script requires one single argument: output file name (with extension name), which is passed to $1. The idea is simple.

  1. Switch to GUI from CLI (TTY1–TTY6 on Xubuntu 18.04; TTY2–TTY7 on Ubuntu 18.04)
  2. Add necessary shell variables. (Adapt it to GDM or other display manager)
  3. Create a temporary XWD file.
  4. Convert this file to a file with your specified file name.
  5. Remove the temporary XWD file.
  6. Switch back to CLI.
Xubuntu 18.04 error after login

Screenshot by xwd

Taken with the above script from TTY on Xubuntu 18.04

zip

Compress files, which are to be extracted using unzip.

Basic usage:

zip [options] [input_ZIP_archive] [file_list]

  • default action is to add/update files in [file_list]; e.g. zip test.zip foo bar creates test.zip from files foo and bar. If [file_list] contains a folder, zip won’t step inside it unless -r is given.
  • -r: recursive add/update.
  • -x: exclude file(s)/folder(s).
  • -u: add new files/update older files in archive.
  • -f: like -u, but it doesn’t add new files.
  • -FS: filesync = update archived file in case of any change in date and/or size, and delete archived file if it doesn’t match [file_list].
  • -m: move the original files in [file_list] into [input_ZIP_archive].
  • -sf: list archived files
  • -t [start_date] / -tt [end_date]: date filtering. Both [start_date] and [end_date] are included. Each date is in the format mmddyyyy or yyyy-mm-dd.
    • -t [start_date]: exclude before [start_date].
    • -tt [end_date]: include before [end_date].
  • -0: no compression
  • -[1-9]: specify compression level from fastest to slowest. The slower, the better the compression.
  • -Z [cm]: specify compression method. Possible choices for [cm] are
    • store: same as -0.
    • deflate: default option. same as any option from -1 to -9.
    • bzip2: bzip2 method will be used.
  • -P [pwd]: (use with care, because some OSes might let others, say the system admin, view your command history) encrypt the ZIP archive with [pwd].
  • -v: verbose/version info
  • -h: help
  • -h2 / -hh: more detailed help

(Last modified on September 10, 2024)