Tuesday, November 08, 2005

Bash - Choking the system - Fork bomb

This is a wicked post. If you execute the command given here, you will most likely have to restart the machine. The command:
:(){ :|:& };:
What does this mean:
:()
{
 : | : &
}
:
It creates a function called : and then calls that function recursively, piping it into itself and putting it in the background. This chokes the system through the continuous creation of processes. This is called a fork bomb. Fork bombs can be created in many languages and in many ways. Check this Wikipedia page.
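The same bomb is much easier to read once the function is given an ordinary name (bomb is just my choice of name for the illustration; do not run this either):
bomb()
{
 bomb | bomb &
}
bomb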

Thursday, October 06, 2005

C - C calling convention

When we call a function in C, some details are pushed onto the stack. These include the address in the calling function to which control must return after the call, as well as the arguments themselves.
But there is one question remaining: What is the order in which the arguments are passed? Meaning: Are they passed from the leftmost to rightmost OR rightmost to leftmost?
The answer is right to left: the rightmost argument is pushed first, and this continues until the leftmost argument is pushed. In Pascal it was the other way round (left to right). Why the difference? Is there any advantage or disadvantage to these approaches?
The advantage of the left-to-right (Pascal) approach is that it is faster. The advantage of the right-to-left (C) approach is that functions with a variable number of arguments can be implemented only with this approach.
Let us say that we have a program with a printf call as shown below:
#include <stdio.h>

int main(void)
{
    int i = 20;

    printf("Hello %d people", i);
    return 0;
}
In this case the call to printf causes the second argument (i) to be pushed onto the stack first, and then the first argument ("Hello %d people").
When the arguments are popped, the first argument is popped first. On scanning the format string, printf knows how many more arguments it must look for. This helps in (though it is not the only requirement for) implementing support for a variable number of arguments in a function.
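To illustrate, here is a minimal sketch of a variadic function using stdarg.h (the function name and the convention of passing the count as the first argument are my own choices for the example):
#include <stdarg.h>
#include <stdio.h>

/* Sums count ints. Because the rightmost arguments are pushed first,
 * the leftmost argument (count) sits at a known position, and from it
 * the callee knows how many more arguments to walk through. */
int sum(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum(3, 10, 20, 30)); /* prints 60 */
    return 0;
}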

Tuesday, August 16, 2005

Bash - Careful while using $?

The exit status of a command can be obtained using the shell variable $?. In other words, $? contains the exit status of the preceding command. But there are some cases when using this carelessly can cause errors.
One scenario in which careless use can be a problem is when multiple exit values have to be distinguished. For example:
#!/bin/bash

/bin/ls * > /home/karthick/lsfile 2>&1
if [ $? -eq 0 ]
then
 echo "Hello"
elif [ $? -eq 1 ]
then
 echo "Hi"
else
 echo "Bye"
fi
This works in most cases. But consider this:
$ ls *
bash: /bin/ls: Argument list too long
$ echo $?
126
In this case the output of the script is expected to be Bye, but it gives the output Hi. In fact, in every case where the exit status of ls is not 0, it outputs Hi.
This is because the condition $? -eq 0 is itself a command, and hence the condition in elif checks the exit status of $? -eq 0, which is 1 (failure; remember the exit status of ls was 126, not 0). Hence the script prints Hi.
In cases where we need to check against multiple values, there are two ways of doing it safely (see the sketch below):
  1. Use case...esac.
  2. Store the exit status of the command in a variable and use the variable in the conditions.
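For example, here is the second approach applied to the script above (same paths as in the original example):
#!/bin/bash

/bin/ls * > /home/karthick/lsfile 2>&1
status=$? # capture the exit status before any other command overwrites it

if [ $status -eq 0 ]
then
 echo "Hello"
elif [ $status -eq 1 ]
then
 echo "Hi"
else
 echo "Bye"
fi
The first approach works for a similar reason: in case $? in ... esac, the value of $? is expanded only once, before any pattern is tested, so later tests cannot clobber it.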

Wednesday, June 01, 2005

Bash - Using Input Field Separators

Consider the following snippet:
#!/bin/bash

for i in `/bin/cat /etc/passwd`
do
 if echo $i | grep $1 
 then
  echo "Found $1"
  exit 0
 fi
done
echo "Not found $1"
exit 1
This might not work in all cases. Consider this user:
ftp:*:14:50:Only ftp user:/var/ftp:/sbin/nologin
When we search for the user ftp, the output is:
$ sh cond.sh ftp
ftp:*:14:50:Only
Found ftp
This is not the expected output. To get around this issue, we can use the shell variable IFS to set the field separator for the input to some other character. The solution would be:
#!/bin/bash

IFS="^M" # To see how this must be done, check here
for i in `/bin/cat /etc/passwd`
do
 if echo $i | grep $1
 then
  echo "Found $1"
  exit 0
 fi
done
echo "Not found $1"
exit 1
The output would now be:
$ sh cond.sh ftp
ftp:*:14:50:Only ftp user:/var/ftp:/sbin/nologin
Found ftp
Note: The if conditions in both snippets test the exit status of the echo ... | grep pipeline directly; to see how this works, see the post below on using statements in conditions.
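To see the effect of IFS in isolation, here is a minimal sketch using the same passwd entry as above:
#!/bin/bash

entry="ftp:*:14:50:Only ftp user:/var/ftp:/sbin/nologin"

# With the default IFS (space, tab and newline), the entry is split at
# the spaces inside the comment field:
for i in $entry
do
 echo "$i"
done

# With IFS set to a newline only, the entry stays in one piece:
IFS=$'\n'
for i in $entry
do
 echo "$i"
done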

Sunday, April 10, 2005

Bash - Reading line-by-line in a loop

This is something that many people would know. Let us take an example:
$ cat ~/file
If you take every crisis as an opportunity
Life will not only be successful,
But will also be satisfying.
Let us assume that you want to display all lines which have "will" in them. One way to do this would be by reading the file line by line and looking for "will". To do this use the following loop:
#!/bin/bash

while read line
do
 echo $line | grep "will"
done<~/file
The same cannot be done directly with a for loop (thanks to Mark Clarkson for pointing this out):
#!/bin/bash

for line in `cat ~/file`
do
 echo $line | grep "will"
done

However, this can be done with for loops by combining it with an IFS change:
#!/bin/bash

OLD_IFS=$IFS
IFS=$'\n' # split on newlines only, not on spaces or tabs

for line in `cat ~/file`
do
 echo $line | grep "will"
done

IFS=$OLD_IFS
In the third snippet, we can have any valid command in the place of cat ~/file. Thus, this can be a very flexible construct.

It is important to note that in the first snippet, read is a built-in and hence does not spawn a new process. However in the third snippet, the cat command causes an additional process to be spawned.
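As an aside, bash's process substitution gives the same flexibility as the third snippet (any command can produce the input) while read still runs as a builtin and IFS is left untouched. A minimal sketch:
#!/bin/bash

# The output of the command inside <( ) is presented to the loop as a file.
while read line
do
 echo "$line" | grep "will"
done < <(cat ~/file)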

Another way to do this is with explicit file descriptors:
#!/bin/bash

exec 3<~/file # Open ~/file as fd 3

while read line  
do
 echo $line | grep "will"
done <&3

exec 3>&- # Close fd #3
In the fourth snippet, we use a feature of the exec command that opens a file and attaches it to the given file descriptor (3 in this case). This descriptor is then used in the redirection at the end of the while loop. The & in front of the file descriptor is important: without it, the shell would look for a file named 3.

All of these constructs generally work. There is one case in which they do not: when another value must be read from the user inside the loop.
#!/bin/bash

while read line
do
 echo $line
 read -p "Enter a value: " value
done<~/file
In the fifth snippet, the inner read consumes from the same redirected standard input, so alternate lines of the file end up in the variable value instead of coming from the user. You can use a construct similar to the fourth snippet to fix this:
#!/bin/bash

exec 3<~/file # Open ~/file as fd 3

while read -u 3 line  
do
 echo $line
 read -p "Enter a value: " -u 0 value
done

exec 3>&- # Close fd #3
In the sixth snippet, we use the -u option, which specifies the file descriptor the corresponding read should use. The first read reads from descriptor 3, while the second reads from descriptor 0 (the terminal). Note that with -u naming the descriptors explicitly, no redirection is needed at the end of the loop.

Thursday, March 31, 2005

Bash - Using statements in conditions

To test the success or failure of a command, the general method of scripting is as follows:
#!/bin/bash

some_command
if [ $? -eq 0 ]
then
    # Command executed successfully
else
    # Command execution failed
fi
A better method to do the same is as follows:
#!/bin/bash
if some_command
then
    # Command executed successfully
else
    # Command execution failed
fi
Please note the change in the if syntax between the two snippets.
The latter form can be very helpful, especially if there is a chance that some line gets added between the command and the if statement. In the former case, we would then effectively be checking the return value of the newly added statement; in the latter case, such problems do not arise.
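For instance, the passwd search from the IFS post above could be written this way (a sketch; the username is assumed to be the first argument, and grep -q suppresses the matched line):
#!/bin/bash

# The if tests the exit status of grep directly.
if grep -q "^$1:" /etc/passwd
then
    echo "Found $1"
else
    echo "Not found $1"
fi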

Thursday, March 17, 2005

Bash - How to append stdout and stderr to the same file?

What happens when we use >> for redirection? This operator opens the file in append mode, so existing contents are not overwritten. But try this:
#!/bin/bash

exec >>foo 2>>&1
echo hello >&1
echo boss >&2
Bash rejects this with a syntax error near the &. Now try making it this way:
#!/bin/bash

exec >>foo 2>>1
echo hello >&1
echo boss >&2
Now you see that a file named 1 is created containing the text boss: the 1 after >> was treated as a file name, not as a file descriptor. So the bottom line is that >> cannot be combined with & this way.
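The way to actually append both streams to the same file is to open the file once in append mode and then duplicate the descriptor with a plain 2>&1:
#!/bin/bash

exec >>foo 2>&1 # stdout appends to foo; stderr becomes a copy of stdout
echo hello >&1
echo boss >&2
Both lines end up appended to foo, in order.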

Bash - How to redirect stdout and stderr to the same file?

There is a difference between giving:
$ command >file1 2>file1
and
$ command > file1 2>&1
In the former case, the file is opened twice, so the two redirections maintain two different file handles, each with its own write offset. Data written through one handle can be overwritten by data written through the other, and hence the data might not be what we expect it to be.
Try the following in a script:
#!/bin/bash

exec >foo1 2>foo1
echo hello >&1
echo boss >&2
In the latter case however, the two redirections share the same handle. Hence this is preferred for most purposes.
Try the following in a script:
#!/bin/bash

exec >foo2 2>&1
echo hello >&1
echo boss >&2
Now cat foo1 and foo2 to see the difference. foo1 will most likely be missing the hello line, because the write of boss started again at offset 0 and overwrote it, while foo2 contains both lines in order.
I will discuss how this happens in a different post.

Friday, January 28, 2005

Basics of SED

Some points about sed that we need to know on day 1 of learning sed:
1) Sed is a stream editor and hence takes input from a file or a group of files.
2) By default, sed writes its output to stdout. If we want the output stored in a file, we can redirect it to a temporary file using >.
3) Sed scripts can be written, and sed commands can also be used inside shell scripts.
4) Since sed, like most other Unix utilities, reads from stdin and writes to stdout, piping is possible, as shown below.
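For instance, points 2 and 4 combine naturally in a pipeline (the file names here are only examples):
$ grep karthick /etc/passwd | sed 's/:/ /g' > /tmp/fields
This pulls the matching line out of /etc/passwd, replaces the colons with spaces, and stores the result in a temporary file.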

I will follow this up with a list of tips on sed usage.

SED - Replace patterns in a file using sed

One of the best tools for editing a stream of characters is sed. In fact sed stands for stream editor. One of my favorite commands in sed is the substitute command. The command syntax is as follows:
$ sed 's/orig/new/' source > dest
The command uses regular expressions, so orig is a regular expression. As written, the command substitutes only the first occurrence of the original pattern on each line; to replace every occurrence, add the g flag (s/orig/new/g).
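For example, to replace every occurrence of colour with color (the file names are just examples):
$ sed 's/colour/color/g' notes.txt > notes.new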
As can be seen, the output has to be redirected to another file, because sed sends its output to stdout. To make the substitution appear in the original file, use the following format:
$ sed 's/orig/new/' source > dest && /bin/mv dest source
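With GNU sed, the same in-place edit can be done with the -i option (a GNU extension; check that your version of sed supports it):
$ sed -i 's/orig/new/' source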