view and create

There are myriad ways to create files, the most common of which are manual editing (with vi, vim, nano, etc.) and standard output redirection:

# echo "Hello World!" >> /tmp/hello.txt

The echo command is a bit of a relic these days and the use of the far more flexible printf is encouraged instead:

# printf "Integer: %d \t float: %3.2f \t string: %10s \n" 5 123.45 "long string"
Integer: 5 	 float: 123.45 	 string: long string
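Since printf recycles its format string when extra arguments remain, a single format can render a whole table. A small sketch with made-up data and a hypothetical output file:

```shell
# printf reuses "%-8s %5d\n" for each pair of extra arguments
printf "%-8s %5d\n" apples 12 pears 7 > /tmp/fruit.txt
cat /tmp/fruit.txt
```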

 

The cat command is pretty straightforward: it can be used not only to view files, but also to concatenate them:

# cat file1 file2 file3 >> file123

There are a few useful tricks we can use with cat:

# cat /tmp/test1
line1
line2
.
.
.
line3
.
# cat -s /tmp/test1   -> the -s flag suppresses duplicate empty lines
line1
line2
.
line3
.
# cat -vET test2     -> shows non-printable characters (-v), tabs (-T) and end-of-line (-E) with the $ sign
line1$
line2$
^I$
$
M-BM-6$
line3$
.
# cat -A test2       -> the -A flag does the same as -vET
line1$
line2$
^I$
$
M-BM-6$
line3$
.
# cat -n test3        -> enumerate all lines
1 line1
2 line2
3
4
5 ¶
6 line3
.
# cat -b test3        -> enumerate non-blank lines
1 line1
2 line2
3 ¶
4 line3
.

If we need to reverse the order of lines in a file, we can sort in reverse provided there is a sorting key:

root:/tmp> cat normal-order.txt | sort
1 aaa
2 bbb
3 ccc
4 ddd
.
root:/tmp> cat normal-order.txt | sort -r
4 ddd
3 ccc
2 bbb
1 aaa

 

But what can we do if there is no sorting column? Then we can go the script route... or use tac:

root:/tmp> tac normal-order.txt | tee reverse-order.txt
4 ddd
3 ccc
2 bbb
1 aaa

The tac command does exactly the same as cat but in reverse order, starting from the last line of a file and working its way up to the 1st.
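A quick sketch of that behaviour, using a throwaway sample file:

```shell
printf '1 aaa\n2 bbb\n3 ccc\n' > /tmp/normal-order.txt   # sample data
tac /tmp/normal-order.txt                                # prints: 3 ccc, 2 bbb, 1 aaa
```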

 

The more command is also used a lot...

marc:~> more .bashrc
# .bashrc
.
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
.
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
.
# User specific aliases and functions
source ~/.bash_profile

The most commonly used keys when more-ing a file are:

- RETURN: moves forward one line
- SPACE: moves forward one screen
- b: moves backward one screen
- q: quit

But there are a few more that can be pretty handy:

v           start the editor at the current line (usually vi but can be changed with the EDITOR variable)
=           show the current line number
:f          show the current file name and line number
:n          move on to the next file
:p          move back to the previous file
/pattern    search for the given regexp
n           find the next regexp match
!cmd        execute a command in a subshell
.           repeat the previous command

And we can include the pattern search and line number when invoking more:

marc:~> more +35 .viminfo            -> jump straight to line 35
marc:~> more +/AUTH_KEY .viminfo     -> jump to first occurrence of AUTH_KEY

 

The less command is a bit more convenient than more as it offers more navigation and search options and does not need to load the whole target file in order to view it. That is especially useful for very large files!

The exact same navigational options that can be used with more (see above) can also be used with less. Additionally though we can also use:

- UP-ARROW: move up one line
- DOWN-ARROW: move down one line
- LEFT-ARROW: scroll left half a screen width
- RIGHT-ARROW: scroll right half a screen width
- g: go to 1st line
- G: go to last line

There are a few more navigational options available (i.e. bracket & parenthesis matching and screen resizing) but we'll skip them for now as they're rarely used.

The same pattern match searches that work with more also work with less, but we have a few more tricks available to us:

/!pattern   will find lines NOT matching the given pattern

/*pattern   will find lines matching the pattern in the current file and starting in the current position and, if none are found, it will continue with the rest of target files

/@pattern   will find lines matching the pattern starting in the 1st line of the 1st file

?pattern   will find lines matching the pattern searching backwards (towards the top of the file)

&pattern   will find lines matching the pattern and hide all the rest from view

n   find the next match

N   find the previous match

 

The head command shows the first lines of a file:

root:/var/log> head dnf.librepo.log
22:16:19 Current date: 2016-12-27T22:16:19+0100
22:16:19 lr_download: Target: file:///etc/dnf/dnf.conf (-)
22:16:19 select_next_target: Selecting mirror for: file:///etc/dnf/dnf.conf
22:16:19 prepare_next_transfer: URL: file:///etc/dnf/dnf.conf
22:16:19 lr_download: Downloading started
22:16:19 check_transfer_statuses: Transfer finished: file:///etc/dnf/dnf.conf (Effective url: file:///etc/dnf/dnf.conf)
22:16:19 lr_download: Target: file:///etc/yum.repos.d/fedora-cisco-openh264.repo (-)
22:16:19 select_next_target: Selecting mirror for: file:///etc/yum.repos.d/fedora-cisco-openh264.repo
22:16:19 prepare_next_transfer: URL: file:///etc/yum.repos.d/fedora-cisco-openh264.repo
22:16:19 lr_download: Downloading started

Without any arguments, head will show the first 10 lines of a given file. But we can show the number of lines we want or limit the output by bytes rather than lines:

root:/var/log> head -n 5 dnf.log                    -> show only the 1st 5 lines
Dec 27 22:16:19 INFO --- logging initialized ---
Dec 27 22:16:19 DDEBUG timer: config: 3 ms
Dec 27 22:16:19 DEBUG cachedir: /var/cache/dnf
Dec 27 22:16:19 DEBUG Loaded plugins: copr, protected_packages, Query, debuginfo-install, needs-restarting, download, playground, config-manager, builddep, noroot, reposync, generate_completion_cache
Dec 27 22:16:19 DEBUG DNF version: 1.1.10
.
root:/var/log> head -5 dnf.log                    -> same as previous command
Dec 27 22:16:19 INFO --- logging initialized ---
Dec 27 22:16:19 DDEBUG timer: config: 3 ms
Dec 27 22:16:19 DEBUG cachedir: /var/cache/dnf
Dec 27 22:16:19 DEBUG Loaded plugins: copr, protected_packages, Query, debuginfo-install, needs-restarting, download, playground, config-manager, builddep, noroot, reposync, generate_completion_cache
Dec 27 22:16:19 DEBUG DNF version: 1.1.10
.
root:/var/log> head -c 99 dnf.log                  -> show only the 1st 99 bytes
Dec 27 22:16:19 INFO --- logging initialized ---
Dec 27 22:16:19 DDEBUG timer: config: 3 ms
.
root:/var/log> head -5 dnf.log dnf.librepo.log     -> show first 5 lines of multiple files
==> dnf.log <==
Dec 27 22:16:19 INFO --- logging initialized ---
Dec 27 22:16:19 DDEBUG timer: config: 3 ms
Dec 27 22:16:19 DEBUG cachedir: /var/cache/dnf
Dec 27 22:16:19 DEBUG Loaded plugins: copr, protected_packages, Query, debuginfo-install, needs-restarting, download, playground, config-manager, builddep, noroot, reposync, generate_completion_cache
Dec 27 22:16:19 DEBUG DNF version: 1.1.10
.
==> dnf.librepo.log <==
22:16:19 Librepo version: 1.7.18 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.51.0 NSS/3.27 zlib/1.2.8 libidn2/0.11 libpsl/0.14.0 (+libidn2/0.10) libssh2/1.8.0 nghttp2/1.13.0)
22:16:19 Current date: 2016-12-27T22:16:19+0100
22:16:19 lr_download: Target: file:///etc/dnf/dnf.conf (-)
22:16:19 select_next_target: Selecting mirror for: file:///etc/dnf/dnf.conf
22:16:19 prepare_next_transfer: URL: file:///etc/dnf/dnf.conf
.
root:/var/log> head -5 -q dnf.log dnf.librepo.log      -> "-q" removes the file headers
Dec 27 22:16:19 INFO --- logging initialized ---
Dec 27 22:16:19 DDEBUG timer: config: 3 ms
Dec 27 22:16:19 DEBUG cachedir: /var/cache/dnf
Dec 27 22:16:19 DEBUG Loaded plugins: copr, protected_packages, Query, debuginfo-install, needs-restarting, download, playground, config-manager, builddep, noroot, reposync, generate_completion_cache
Dec 27 22:16:19 DEBUG DNF version: 1.1.10
22:16:19 Librepo version: 1.7.18 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.51.0 NSS/3.27 zlib/1.2.8 libidn2/0.11 libpsl/0.14.0 (+libidn2/0.10) libssh2/1.8.0 nghttp2/1.13.0)
22:16:19 Current date: 2016-12-27T22:16:19+0100
22:16:19 lr_download: Target: file:///etc/dnf/dnf.conf (-)
22:16:19 select_next_target: Selecting mirror for: file:///etc/dnf/dnf.conf
22:16:19 prepare_next_transfer: URL: file:///etc/dnf/dnf.conf
.
The tail command does the opposite and shows the end of a file:

root:/var/log> tail -n 3 dnf.log            -> show last 3 lines
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
.
root:/var/log> tail -3 dnf.log             -> same as above
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
.
root:/var/log> tail -c 157 dnf.log          -> show last 157 bytes
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
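The line/byte distinction above is easy to confirm on generated data (file name hypothetical):

```shell
seq 1 10 > /tmp/nums.txt
head -n 3 /tmp/nums.txt   # 1 2 3
tail -n 2 /tmp/nums.txt   # 9 10
head -c 4 /tmp/nums.txt   # exactly 4 bytes: "1", newline, "2", newline
```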

We can tail the last n lines of a file and keep checking every 1 second for any changes with the -f flag ("f" for "follow"):

root:/var/log> tail -3f dnf.log                            -> show last 3 lines and any added afterwards
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
.
root:/var/log> tail -n 3 -f --sleep-interval=2 dnf.log    -> change interval check to 2 secs
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
.

If the file we are tailing is renamed, tail keeps following it because, by default, it follows the file descriptor rather than the file name. If we do not want that...

root:/var/log> tail -n 3 --follow=name --sleep-interval=2 dnf.log   -> follow the filename and not the descriptor
Dec 30 13:23:11 DEBUG Making cache files for all metadata files.
Dec 30 13:23:11 INFO Metadata cache refreshed recently.
Dec 30 13:23:11 DDEBUG Cleaning up.
.

We can tell tail NOT to fail even if the file cannot be read with the "--retry" option. This is useful when the file to tail has not yet been created or is temporarily unavailable. Furthermore, we can make tail exit when a given process terminates with the "--pid" option.

root:/var/log> tail -n 3 --follow=name --sleep-interval=2 --retry --pid=28421 dnf.log

 

Another file creation command is touch. This command is most often used to create empty regular files or to update their access & modification times. But it can do a few more things:

root:/tmp> touch /tmp/test123
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Oct 9 11:34 /tmp/test123
.
root:/tmp> touch -a /tmp/test123     → change only access time
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Oct 9 11:34 /tmp/test123
.
root:/tmp> touch -m /tmp/test123     → change only modification time
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Oct 9 11:37 /tmp/test123
.
root:/tmp> touch -m --date="2004-02-27 14:19:13" /tmp/test123 → set explicit mtime
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Feb 27 2004 /tmp/test123
.
root:/tmp> touch -a --date="2004-02-27 14:19:13" /tmp/test123 → set explicit atime
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Feb 27 2004 /tmp/test123
.
root:/tmp> touch --date="2004-02-27 14:19:13" /tmp/test123 → set explicit atime & mtime
root:/tmp> ls -l /tmp/test123
-rw-r--r--. 1 root root 0 Feb 27 2004 /tmp/test123
.
root:/tmp> rm -f /tmp/test456
root:/tmp> touch -c /tmp/test456     → update times only if the file exists; never create it
root:/tmp> ls -l /tmp/test456
ls: cannot access /tmp/test456: No such file or directory
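The same checks can be scripted with stat, which prints individual timestamps; the paths here are hypothetical:

```shell
touch -d "2004-02-27 14:19:13" /tmp/touch-demo   # -d/--date sets atime & mtime
stat -c '%y' /tmp/touch-demo                     # GNU stat: show the mtime
touch -c /tmp/no-such-file-here                  # -c: never create the file
test ! -e /tmp/no-such-file-here && echo absent
```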

 

As is the case with touch, we can create empty files with the xfs_mkfile utility. But we can also give those new files any size in bytes, kbytes, mbytes, gbytes or OS blocks.

# xfs_mkfile /tmp/testfile      → let's create a new file
# ls -l /tmp/testfile           → didn't work because we didn't specify its size!
ls: cannot access '/tmp/testfile': No such file or directory
.
# xfs_mkfile 0b testfile0      → let's do it again with size 0 blocks
# xfs_mkfile 512 testfile1     → 512 bytes (minimum)
# xfs_mkfile 2b testfile2      → 2 OS blocks (2 x 4096 = 8192 bytes)
# xfs_mkfile 2m testfile3      → 2 mbytes
# xfs_mkfile 1g testfile4      → 1 gbyte
# ls -l testfile*
-rw-------. 1 root root          0 Apr 9 20:00 testfile0
-rw-------. 1 root root        512 Apr 9 20:09 testfile1
-rw-------. 1 root root       8192 Apr 9 20:09 testfile2
-rw-------. 1 root root    2097152 Apr 9 20:09 testfile3
-rw-------. 1 root root 1073741824 Apr 9 20:10 testfile4

The default behaviour of xfs_mkfile is to zero-out the whole file and that might take some time for large files. If we want to save time and do not care about initialising all the blocks, we can use the "-n" flag:

# xfs_mkfile -n 256m testfile4

With "-n" we skip initialising the blocks and just write one block at the end of the file.
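If xfs_mkfile is not at hand, a roughly equivalent sparse file can be sketched with plain coreutils (path hypothetical):

```shell
truncate -s 256M /tmp/sparse.img   # apparent size is 256 Mbytes...
du -k /tmp/sparse.img              # ...but few or no blocks are actually allocated
```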

 

The fallocate binary can also be used to create new files but it is normally used to preallocate extra space for existing files:

root:/tmp> fallocate -l 1m testfile1           → create a new file of 1 Mbyte
root:/tmp> ls -ls testfile1
1024 -rw-r--r--. 1 root root 1048576 Apr 9 20:52 testfile1
root:/tmp> fallocate -l 2m testfile1           → extend the file size to 2 Mbytes
root:/tmp> ls -ls testfile1
2048 -rw-r--r--. 1 root root 2097152 Apr 9 20:53 testfile1
root:/tmp> fallocate -o 2m -l 1m testfile1     → add 1 Mbyte more at the end of the file
root:/tmp> ls -ls testfile1
3072 -rw-r--r--. 1 root root 3145728 Apr 9 20:54 testfile1

That's not all though, as we can also shrink a file by returning unused blocks to the file system:

root:/tmp> fallocate -c -o 2m -l 512k testfile1
root:/tmp> ls -ls testfile1
2560 -rw-r--r--. 1 root root 2621440 Apr 9 21:03 testfile1

The "-c" stands for collapse-range: in the example above we deallocated 512 Kbytes of unused space starting at offset 2 Mbytes. Blocks past offset+length are shifted down to the offset point so that no hole is left in the file.

We can also return unused blocks to the file system by creating a sparse file without modifying its apparent size:

root:/tmp> fallocate -p -o 2m -l 256k testfile1
root:/tmp> ls -ls testfile1
2304 -rw-r--r--. 1 root root 2621440 Apr 10 06:11 testfile1

Using the "-p" flag (it stands for "punch-hole") we just deallocated 256 Kbytes starting at offset 2 Mbytes without changing the file size. Note that the number of blocks came down from 2560 to 2304 (real storage) while the apparent size remained the same. Be aware that on file systems with a considerable number of sparse files, the statistics reported by df might be totally misleading...

We can reverse what we did with the "-p" flag by using "-z" to zero-out sparse blocks:

root:/tmp> fallocate -z -o 2m -l 256k testfile1
root:/tmp> ls -ls testfile1
2560 -rw-r--r--. 1 root root 2621440 Apr 10 06:39 testfile1
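The difference between apparent size and allocated blocks can be inspected with stat; the fallback to truncate below is for file systems without fallocate support (paths hypothetical):

```shell
fallocate -l 1M /tmp/alloc.img 2>/dev/null || truncate -s 1M /tmp/alloc.img
stat -c 'size=%s blocks=%b' /tmp/alloc.img   # apparent bytes vs allocated 512-byte blocks
```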

 

The truncate command is generally used to deallocate unused file blocks. We can deallocate all of them or preserve some to avoid the reallocation overhead later on. Not only that: we can use truncate to allocate extra space and even create new files!

root:/tmp> ls -ls testfile1
2560 -rw-r--r--. 1 root root 2621440 Apr 10 06:39 testfile1
root:/tmp> truncate -s 2m testfile1                  → truncate to 2Mb
root:/tmp> ls -ls testfile1
2048 -rw-r--r--. 1 root root 2097152 Apr 10 07:00 testfile1
.
root:/tmp> truncate -s 250 -o testfile1              → truncate to 250 I/O blocks (4Kb x 250 = 1,024,000 bytes)
root:/tmp> ls -ls testfile1
1000 -rw-r--r--. 1 root root 1024000 Apr 10 07:01 testfile1
.
root:/var/tmp> truncate -s 200 -o testfile1          → truncate to 800Kb (4kb x 200)
root:/var/tmp> ls -ls testfile1
800 -rw-r--r--. 1 root root 819200 Apr 10 07:13 testfile1
.
root:/var/tmp> truncate -s +1m testfile1             → extend the apparent size by 1Mb (no new blocks are allocated)
root:/var/tmp> ls -ls testfile1
800 -rw-r--r--. 1 root root 1867776 Apr 10 07:20 testfile1
.
root:/var/tmp> truncate -s -512k testfile1           → shrink the apparent size by 512Kb
root:/var/tmp> ls -ls testfile1
800 -rw-r--r--. 1 root root 1343488 Apr 10 07:20 testfile1

We can create new files and reference others to determine the size:

root:/var/tmp> truncate -s 1m testfile5           →  create if it isn't there
root:/var/tmp> truncate -c -s 1m testfile6        →  do not create it if it's not there already
root:/var/tmp> ls -ls testfile[56]
   0 -rw-r--r--. 1 root root 1048576 Apr 10 07:35 testfile5
.
root:/var/tmp> truncate -r testfile1 testfile2 testfile3
root:/var/tmp> ls -ls testfile*
800 -rw-r--r--. 1 root root 1343488 Apr 10 07:20 testfile1
   0 -rw-r--r--. 1 root root 1343488 Apr 10 07:27 testfile2
   0 -rw-r--r--. 1 root root 1343488 Apr 10 07:27 testfile3
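A small, self-contained sketch of the truncate options above (file names hypothetical):

```shell
truncate -s 1M /tmp/trunc-demo                 # create (or resize) to exactly 1 Mbyte
stat -c '%s' /tmp/trunc-demo                   # 1048576
truncate -s -512K /tmp/trunc-demo              # shrink the apparent size by 512 Kbytes
stat -c '%s' /tmp/trunc-demo                   # 524288
truncate -r /tmp/trunc-demo /tmp/trunc-clone   # -r: copy the size of a reference file
stat -c '%s' /tmp/trunc-clone                  # 524288
```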

 

The dd command is normally used to copy files, partitions or whole disks block-by-block. Let's see some use cases to get an idea of how powerful it can be:

root:/var/tmp> dd if=testfile1 of=testfile2
2624+0 records in
2624+0 records out
1343488 bytes (1.3 MB, 1.3 MiB) copied, 0.00370251 s, 363 MB/s

We just copied the input file onto an output file the same way we could do with cp. Next we'll perform some operations that cp cannot handle:

# dd if=/dev/sdb5 of=/dev/sdc5                →  copy partition to partition
# dd if=/dev/sdb5 of=sdb5.img                 →  copy partition to image file
# dd if=/dev/sdb5 | bzip2 > sdb5.img.bz2      →  pipe partition contents to bzip2
# dd if=/dev/zero of=/dev/sdb6                →  zero-out partition

For operations involving large files or devices, we might want to tweak the I/O parameters: count, bs, ibs and obs.

root:/var/log> dd if=/dev/zero of=journal.log count=1000000000 bs=1
1000000000+0 records in
1000000000+0 records out
1000000000 bytes (1.0 GB, 954 MiB) copied, 986.408 s, 1.0 MB/s
.
root:/var/log> dd if=/dev/zero of=journal.log count=1953125 bs=512
1953125+0 records in
1953125+0 records out
1000000000 bytes (1.0 GB, 954 MiB) copied, 2.395 s, 418 MB/s
.
root:/var/log> dd if=/dev/zero of=journal.log count=244140 bs=4096
244140+0 records in
244140+0 records out
999997440 bytes (1.0 GB, 954 MiB) copied, 0.857383 s, 1.2 GB/s

We can see above that as we increase the I/O read/write chunk size and reduce the number of operations, the I/O speed increases dramatically. Which makes perfect sense! It is important to remember that when the input comes from /dev/zero, /dev/random or /dev/urandom and the output goes to a normal file, we should always specify count and bs to avoid filling up the file system.

In cases where the input and output files/devices have different storage characteristics we might want to replace bs by ibs and obs to specify different I/O block sizes for input/output:

root:/var/log> dd if=/dev/zero of=journal.log count=244140 ibs=1k obs=4k
244140+0 records in
61035+0 records out
249999360 bytes (250 MB, 238 MiB) copied, 0.243174 s, 1.0 GB/s

When we specify different I/O chunk sizes for input and output, count refers to input blocks, so it is the ibs that determines the total amount transferred, as shown above.

Sometimes we will want to copy files/devices partially, and we can do that with the seek & skip options:

# dd if=/dev/zero of=application.log bs=1 count=0 seek=1G
0+0 records in
0+0 records out
0 bytes copied, 0.000165126 s, 0.0 kB/s

We created a 1Gb file without writing any data to it (count=0) by seeking to the 1Gb offset and marking it as EOF. Next we will copy a partition onto an image file, skipping the partition's 1st Mbyte:

root:/var/log> dd if=/dev/sda1 of=sda1.img bs=1M skip=1
475+0 records in
475+0 records out
498073600 bytes (498 MB, 475 MiB) copied, 4.7384 s, 105 MB/s

As seen above the skip option relates to the if whereas seek applies to of, so nothing stops us from using both at the same time.
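A tiny demonstration of skip on a throwaway file (status=none just silences the transfer statistics):

```shell
printf 'abcdefghij' > /tmp/dd-src.bin
# bs=2 skip=2 skips the first two 2-byte input blocks (4 bytes) before copying
dd if=/tmp/dd-src.bin of=/tmp/dd-dst.bin bs=2 skip=2 status=none
cat /tmp/dd-dst.bin   # efghij
```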

There are a bunch of useful tweaks we can do to the normal behaviour of dd:

root:~> dd if=/dev/zero of=perftest bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.442818 s, 2.4 GB/s
.
root:~> dd if=/dev/zero of=perftest bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.1622 s, 96.2 MB/s
.
root:~> dd if=/dev/zero of=perftest bs=1M count=1024 conv=fdatasync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.4843 s, 86.0 MB/s
.
root:~> dd if=/dev/zero of=perftest bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.7918 s, 91.1 MB/s

The 1st performance run above just wrote 1G of zeros to a file, but the statistics are not representative of the real I/O performance at the storage layer because of caching and batched writes. So in the 2nd run we forced the data to be physically written to disk and saw an immediate performance hit. In the 3rd run we forced direct writes to disk, skipping the file system cache altogether. And in the 4th run we also asked for the metadata to be synchronised (fdatasync forces data syncs whereas fsync does the same for data+metadata). The 3rd and 4th run timings are the realistic ones.

There are a few more options (nocreat, notrunc, noerror, append, nonblock, noatime, nocache, ...) that might come in handy in certain circumstances.

 

Another very useful and often-used command is ln, which creates hard & soft links between files and directories. Just as a reminder, a hard link is a pointer to an inode on disk holding the actual data. Every file has at least one hard link or it becomes an orphan inode (placed in the lost+found directory). By creating an extra hard link we are creating another pointer to the file's inode and incrementing the link count by 1.

root:/tmp> ls -l test.1
-rw-r--r--. 1 marc marc 7 Oct 8 10:59 test.1
.
root:/tmp> ln test.1 tmp1/test.2
root:/tmp> ls -l test.1
-rw-r--r--. 2 marc marc 7 Oct 8 10:59 test.1
.
root:/tmp> ln test.1 tmp2/test.3
root:/tmp> ls -l test.1
-rw-r--r--. 3 marc marc 7 Oct 8 10:59 test.1

Logically, we will only delete a file for good when we delete all the hard links to it!

A symbolic link, on the other hand, is nothing more than a pointer to a file name (i.e. to a hard link). So by deleting the last hard link we delete the actual file and break its symbolic links. The main advantages of symbolic links are that they can span different filesystems and can point to files that do not yet exist.
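The difference is easy to demonstrate (all paths hypothetical):

```shell
rm -f /tmp/link-orig.txt /tmp/link-hard.txt /tmp/link-soft.txt
echo data > /tmp/link-orig.txt
ln /tmp/link-orig.txt /tmp/link-hard.txt
stat -c '%h' /tmp/link-orig.txt                     # link count is now 2
ln -s /tmp/link-orig.txt /tmp/link-soft.txt
rm /tmp/link-orig.txt
cat /tmp/link-hard.txt                              # data survives via the hard link
cat /tmp/link-soft.txt 2>/dev/null || echo broken   # the symlink now dangles
```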

To create a symbolic link we have to add the "-s" flag:

root:/tmp/tmp1> ls -l
total 568
-rw-r--r--. 2 root root     55 Oct 9 13:02 2resolv.txt
-rwxr-xr-x. 2 root root    624 Oct 9 13:02 bin.lst
-rwxr-xr-x. 2 root root    288 Oct 9 13:02 monitoring.lst
-rw-r--r--. 2 root root 319721 Oct 9 13:02 ssh.pdf
-rw-r--r--. 2 root root  25007 Oct 9 13:02 strace.log
.
root:/tmp/tmp1> cd ../tmp2
root:/tmp/tmp2> ls -l
total 0
.
root:/tmp/tmp2> ln -s ../tmp1/* .
root:/tmp/tmp2> ls -l
total 0
lrwxrwxrwx. 1 root root 19 Oct 9 13:07 2resolv.txt -> ../tmp1/2resolv.txt
lrwxrwxrwx. 1 root root 15 Oct 9 13:07 bin.lst -> ../tmp1/bin.lst
lrwxrwxrwx. 1 root root 22 Oct 9 13:07 monitoring.lst -> ../tmp1/monitoring.lst
lrwxrwxrwx. 1 root root 15 Oct 9 13:07 ssh.pdf -> ../tmp1/ssh.pdf
lrwxrwxrwx. 1 root root 18 Oct 9 13:07 strace.log -> ../tmp1/strace.log

The last 3 file creation commands are used far less often but can come in very handy in specific situations.

The first one of these is mkfifo, which is used to create FIFO pipes for inter-process communication. Let's see how it works with a simple example:

# create pipe with explicit permissions
root:/home/marc> mkfifo -m 0700 /var/tmp/fifo1
.
# check file type and permissions
root:/home/marc> ls -l /var/tmp/fifo1
prwx------. 1 root root 0 Oct 9 10:35 /var/tmp/fifo1
.
# from another process we send input to the pipe
root:/home/marc> echo "Input from another process sent through FIFO pipe!" >> /var/tmp/fifo1
.
# and we can read it from the other side of it
root:/home/marc> tail -f /var/tmp/fifo1
Input from another process sent through FIFO pipe!

We can have one or more processes using the pipe as their standard output and another one reading from it. We can also have 2 processes communicating with each other by reading from and writing to it. As its name implies, a pipe is nothing more than a file that 2 or more processes can use as stdin/stdout to communicate in a synchronous manner.
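The write-then-read handshake can be sketched in a single script; the writer is backgrounded because it blocks until a reader opens the other end (pipe name hypothetical):

```shell
rm -f /tmp/fifo-demo
mkfifo /tmp/fifo-demo
echo "hello through the pipe" > /tmp/fifo-demo &   # writer blocks until cat opens the FIFO
cat /tmp/fifo-demo
wait
rm -f /tmp/fifo-demo
```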

We can also use the command mknod to create FIFO pipes...

root:/tmp> mknod -m 0700 /var/tmp/fifo2 p
root:/tmp> ls -l /var/tmp/fifo*
prwx------. 1 root root 0 Oct 9 10:36 /var/tmp/fifo1
prwx------. 1 root root 0 Oct 9 10:51 /var/tmp/fifo2

... but also block and character devices. In normal circumstances we would not need to create block/character devices manually, but sometimes (e.g. with Oracle RAC we might want to create device aliases with certain names linked to existing devices) there is no other option.

# create character device /dev/ora_db1_raw1 pointing to the existing device
# with major number 43 and minor number 202
root:/tmp> mknod /dev/ora_db1_raw1 c 43 202
.
# create block device /dev/xvda pointing to device 67:109
root:/tmp> mknod /dev/xvda b 67 109

At times we do need a temporary file or directory whose name is irrelevant and that's where mktemp comes into the picture.

root:/tmp> mktemp
/tmp/tmp.O7sV4FI8Ms

Without any arguments the mktemp command creates a temporary file in the $TMPDIR directory (if set) or /tmp, with u+rw permissions minus umask restrictions and a name like tmp.XXXXXXXXXX that is printed upon creation. We can change the naming pattern, the destination directory and whether a file or a directory is created:
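A quick check of those defaults (the template name is a hypothetical example):

```shell
f=$(mktemp /tmp/demo.XXXXXXXXXX)   # at least three trailing X's are required
test -f "$f" && echo created
stat -c '%a' "$f"                  # 600: u+rw, further restricted by the umask
rm "$f"
```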

root:/tmp> export TMPDIR=/var/tmp
root:/tmp> mktemp
/var/tmp/tmp.dIKZQCHGiV
.
# create temp file with the pattern app9.XXXXXXXXXX.tmp in /var/run
root:/tmp> mktemp -p /var/run app9.XXXXXXXXXX.tmp
/var/run/app9.fPzdUpogZR.tmp
.
# create temp directory with the pattern app9.XXXXXXXXXX.dir in /var/run
root:/tmp> mktemp -d -p /var/run app9.XXXXXXXXXX.dir
/var/run/app9.zIp6xkoRxH.dir

Finally, to create directories we use mkdir:

root:/tmp> mkdir logs                -> create directory logs in the current directory with default permissions
root:/tmp> mkdir /tmp/archive        -> create directory archive given its full path
root:/tmp> mkdir -p /var/opt/oracle  -> create the full path in one go if required
root:/tmp> mkdir -m 700 old-logs     -> create directory with 700 permissions
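And a sketch verifying the -p and -m flags (directory names hypothetical):

```shell
rm -rf /tmp/a /tmp/private-dir
mkdir -p /tmp/a/b/c && echo ok      # -p creates the intermediate directories too
mkdir -m 700 /tmp/private-dir
stat -c '%a' /tmp/private-dir       # 700
```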

 
