
Tuesday, March 1, 2011

World cup 2011 Schedule


Thursday, February 24, 2011

Straightforward translations of English movie titles.

Die Another Day: inko roju sachipodaam
Tomorrow Never Dies: repu enthaki saavadu
Goldfinger: bangaaru velu
The Mummy: Amma
The Mummy Returns: thirigochina Amma
True Lies: nijam abaddam aadindi
The Terminator: muginchuvaadu
I Know What You Did Last Summer: poyina vesavilo nuvvem chesaavo naaku thelsu
Hellboy: narakapu pilladu
Fantastic Four: adbhuthamina aa naluguru
Angels and Demons: devathalu mariyu deyyalu
Evil Dead: maa chedda chaavu
Evil Dead 2: maa chedda chaavu rendosaari
Evil Dead 3: maa chedda chaavu moodosaari
Salt: uppu
Raging Bull: piki legusthunna yeddu
Pulp Fiction: gujju gharshana
I Am Legend: nenu chala goppavaadini



A Nightmare on Elm Street: Elm veedhilo peedakala
Wrong Turn: thappu dova
Iron Man: inapa manishi
I Know Who Killed Me: nannu sampinodu naaku thelsu
I Can't Think Straight: nenu thinnaga aalochinchalenu
Men in Black: cheekatilo magaallu
Tomb Raider: samaadhula meeda swari chesedi
G.I. Joe: The Rise of Cobra: G.I. Joe mariyu piki lesina thachupaamu
Gone in 60 Seconds: nimishamlo poyindi
Gone with the Wind: gaalitho paatu poyindi
Paranormal Activity: asaadhaaranamayina charya
The Hurt Locker: noppini bhandinchevaadu
Priest: poojaari
Vampire's Kiss: pisacham pettina muddu

Wednesday, February 9, 2011

20 Days to the Top free download


For people who want to sell themselves better
This book is a winner! I've read many sales books offering the same tired formulas and "power closes" designed to trap unsuspecting consumers in a deceitful sales web. Refreshingly, Brian Sullivan offers a proven, duplicatable formula based on learning what the customer really wants and giving it to them in an ethical way they find hard to resist. One problem with most sales books and training is that the student has no way to easily remember and implement what they've learned, so the initial enthusiasm quickly wears off and salespeople revert to their old way of doing things. With easy-to-remember acronyms and PRECISE call sheets, you'll soon be asking CLEAR questions and using SHARP responses to customer concerns, and having more fun and making a lot more money along the way. Buy this book and become a PRECISE selling superstar.


Free Download : RS Link
 
 

India's 50 Most Powerful People 2009





 From BUSINESS WEEK

In India, change is so rapid it surprises even the powerful. Fortunes vanish, markets melt down, and the most die-hard fans find someone else to love, someone else to vote for.

Take, for instance, Navin Chawla. With 712 million voters considering their ballot as Indians vote on who will lead their country, one of India's most powerful men is perhaps the Chief Election Commissioner, N. Gopalaswami. India's elections began on Apr. 15 and take place in stages nationwide over several weeks. During that time, Gopalaswami is a bureaucrat with almost unlimited powers to impose order on an unruly process, moderate hate speech, and herd the world's largest democracy through a peaceful transfer of power. But on June 16, the elections will end, and he will vanish back into the labyrinth of the Indian bureaucracy. In modern India, even powerful reigns can be short-lived.

In the newest edition of BusinessWeek's list of the 50 most influential Indians, politicians jostle for space with professors, businessmen with cricketers. The attempt is to pinpoint the shifts in power that defined India in the past year, and to predict the players to watch for in the next year.

Linux+ Certification Bible free download



Unleash the power of CompTIA's newest certification! Linux+ is the next hot certification to come from CompTIA, the company behind A+ with a following of 250,000+ certified and growing. Linux+ Certification Bible contains everything you need to know to pass the exam as well as practical information in one comprehensive volume! 



Free Download : Click here

Purging the process, Part 1

Introduction to pipes, filters, and redirection, Part 1

Summary
If you've arrived at Unix from the graphical user interface (GUI) world of Windows or Mac OS, you're probably not familiar with pipes and filters. Even among character-based interfaces, only a few of them, such as MS-DOS, provide even rudimentary pipes and redirection.

Redirection allows a user to redirect output that would normally go to the screen and instead send it to a file or another process. Input that normally comes from the keyboard can be redirected to come from a file or another process.

 Purging the process: Read the whole series! 
Part 1. The basics of pipes and redirections
Part 2. Pipes and redirection: More advanced features
When a typical Unix utility starts up, three files are automatically opened for you inside of it. These files are given file descriptor numbers inside the program -- 0, 1, and 2 -- but they're more commonly known as stdin (standard in -- file descriptor: 0), stdout (standard out -- file descriptor: 1) and stderr (standard error -- file descriptor: 2). When the program starts, default assignments for these files are made to /dev/tty, which is the device name for your terminal. The stdin file is assigned to the keyboard of your terminal, while stdout and stderr are assigned to its screen.
Let's start with a simple example using grep. Type a grep command to find lines containing the word hello, then type the following lines at your terminal. At the end of each line press Enter to move down to the next line. Watch what happens as you type say hello.
$ grep "hello"
Now is the time
for every good person to
say hello.
The screen repeats the last line.
$ grep "hello"
Now is the time
for every good person to
say hello.
say hello.
Hold down the Control key and press D to end the input to grep. Control-D is an end-of-file marker and can be entered as a keystroke to stop any utility that is taking its input from the keyboard.
The grep "hello" line is a command to search standard input for lines containing hello and echo any such line found to standard output. The Unix console automatically echoes anything you type, so the three lines appear on the screen as you type them. Then grep hits a line containing hello and decides to output it to standard out, and say hello appears on the screen a second time. The second appearance is the output from grep.
Standard output can be redirected to a file using the right angle bracket (>) as shown in the example below. The same grep command is redirected to send its output to a file named junk.txt. The say hello line doesn't appear a second time because it's been directed to the junk.txt file. After the user presses Control-D, cat is used to display the contents of junk.txt, which contains grep's single output line.
$ grep "hello" >junk.txt
Now is the time
for every good person to
say hello.
(type control-D here)
$ cat junk.txt
say hello.
$
Standard input can be redirected to come from a file by using the left angle bracket (<). In order to demonstrate this, we need a file that can be used for input. Use vi to create the following sample file and save it as hello.txt.
Now is the time
for every good person to
say hello.
When you type the following command, notice that the output from grep is the single say hello. Because input is being drawn from a file, you don't need to use Control-D to stop the process.
$ grep "hello" <hello.txt
say hello.
Both standard input and output are redirected in the following example. Once grep starts up, it takes its input from hello.txt and outputs the result to junk.txt. There is no output on the screen, but you can use cat to display junk.txt and verify the contents.
$ grep "hello" <hello.txt>junk.txt
$ cat junk.txt
say hello.
$
If a redirection to an output file encounters a file that already exists, that file is destroyed and a new one, containing the new output, is created, assuming the user has the appropriate permissions. You can confirm this by using the previous example to search for a different line of text. In this example, the earlier version of junk.txt has been replaced with the new output from grep, the single line Now is the time.
$ grep "Now" <hello.txt >junk.txt
$ cat junk.txt
Now is the time
$
There is a convention used in Unix programs which dictates that, if a file is expected as input to a program but no file is named on the command line, standard input is used. Because grep is designed to search for a string in a file, or files, it uses a command-line syntax that lets you name a file on the command line, and the input redirection symbol is not needed. Internally, grep checks if a file is named on the command line and opens and uses it. If no file name is found, standard input is used. The following command lines for grep have the identical effect.
Internally, the first command reassigns hello.txt to standard input and uses it for input; the second command opens hello.txt as a file and uses it for input. grep doesn't expect an output file to be named on the command line. To get the output into a file, you must use output redirection. It doesn't hurt to redirect grep input, but in the case of grep, the redirection is already taken care of for you on the command line.
$ grep "Now" <hello.txt >junk.txt
$ grep "Now" hello.txt >junk.txt
If you want to preserve the existing output file and append new information to it, use a double right angle bracket (>>). The following example uses echo, which normally outputs to the screen, to create the hello.txt file without using an editor. The output of the echo command is redirected into the file, and two more lines are appended to it.
$ echo "Now is the time" >hello.txt
$ echo "for every good person to" >>hello.txt
$ echo "say hello." >>hello.txt
$ cat hello.txt
Now is the time
for every good person to
say hello.
$
Pipes are created as a means of taking the output of one program and using it as the input to another. The pipe symbol (|) is used as a connector between the two programs. In the following example, look at the first part of the command up to the first pipe symbol. The cat command normally outputs to the screen; in this case, however, the output has been sent into a pipe. On the righthand side of the pipe, this output becomes the input to grep "hello". The output from grep "hello" is in turn sent into another pipe. On the right side of that pipe, the output is used as standard input to a sed command that searches for hello and replaces it with bye. The final result is redirected to a file named result.txt which cat displays on the screen as say bye.
$ cat hello.txt | grep "hello" | sed -e "s/hello/bye/" > result.txt
$ cat result.txt
say bye.
$
If this were broken down step by step using simple redirection, you would need several commands, as well as the final rm steps to clean up the intermediate work files that were created.
$ cat hello.txt >wrk1.txt
$ grep "hello" <wrk1.txt >wrk2.txt
$ sed -e "s/hello/bye/" <wrk2.txt >result.txt
$ cat result.txt
say bye.
$ rm wrk1.txt wrk2.txt
The initial step of getting hello.txt into the grep command could also be done in several other ways. Two examples are shown below. The first redirects input to grep from hello.txt on the lefthand side of the pipe; the second puts parentheses around the grep and sed commands, groups them as a subprocess, then redirects input and output to the grouped process.
$ grep "hello" < hello.txt | sed -e "s/hello/bye/" > result.txt
$ ( grep "hello" | sed -e "s/hello/bye/" ) < hello.txt > result.txt
$
Redirecting standard error output
So far I've only shown you how to pipe and redirect standard output, but it's frequently useful to do something with error output. In the following example, find is being used to search the entire system (starting at / ) for files with a .txt extension. Whenever one is found, its full directory entry is placed in a file named textfiles. The example below shows sample error messages that are generated when find attempts to access an unavailable directory.
$ find / -name "*.txt" -exec ls -l {} \; >textfiles
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
The error messages can be suppressed by redirecting them to /dev/null, which is a special device that can be thought of as a wastebasket for bytes written to it on output. Everything that goes to /dev/null disappears. To redirect standard error, use a right angle bracket preceded by a 2, which is the file descriptor number for standard error. If you don't care about error messages, send them to the /dev/null byte bucket.
$ find / -name "*.txt" -exec ls -l {} \; 2>/dev/null >textfiles
$
The following command combines redirection and pipes to produce a full list of all .txt files, sorted by the third field of the ls -l directory entry, the owner's name.
$ find / -name "*.txt" -exec ls -l {} \; 2>/dev/null |sort -k 3 >textfiles
$
Shell scripts can also redirect their output, so the above command could be put into a shell script without redirection, but the output can be redirected when the command is executed.
#!/usr/bin/sh
# usertexts
#    outputs a listing of texts files on the system, ordered by owner id

find / -name "*.txt" -exec ls -l {} \; 2>/dev/null |sort -k 3
This shell script can then be executed with the output redirection done at the command line.
$ usertexts >textfiles
$
Pipes and redirection can be combined to create very powerful tools that start a text stream and then apply different tools to that stream, filtering it as it passes through different processes.
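As a small sketch of that style (the input text here is made up), each stage below is a filter that transforms the stream before handing it to the next:

```shell
# Count word frequencies in a text stream.  Every stage reads from
# its pipe, transforms the stream, and writes into the next pipe.
printf 'say hello\nsay bye\nsay hello\n' |
  tr ' ' '\n' |           # split into one word per line
  sort |                  # bring identical words together
  uniq -c |               # collapse each run into a count and the word
  sort -rn |              # order by count, most frequent first
  awk '{print $2, $1}'    # reformat as "word count"
```

The result lists say 3, hello 2, and bye 1, one per line, without a single intermediate work file.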
Next month, I'll take a look at more advanced uses of pipes and redirection.
 

Purging the process, Part 2

Advanced topics in pipes, filters, and redirection

Last month I covered several basics, such as input redirection:

$ grep "hello" <hello.txt
say hello.
 Purging the process: Read the whole series! 
Part 1. The basics of pipes and redirections
Part 2. Pipes and redirection: More advanced features
Output redirection:
$ grep "hello" >junk.txt
Now is the time
for every good person to
say hello.
(type control-D here)
$ cat junk.txt
say hello.
$
Input and output redirection, and the use of input files on the command line instead of redirected input:
$ grep "Now" <hello.txt >junk.txt
$ grep "Now" hello.txt >junk.txt
Appending additional data to a file using an output redirection:
$ echo "Now is the time" >hello.txt
$ echo "for every good person to" >>hello.txt
$ echo "say hello." >>hello.txt
$ cat hello.txt
Now is the time
for every good person to
say hello.
$
Redirecting standard output and standard error, and redirecting standard error to the /dev/null byte wastebasket:
$ find / -name "*.txt" -exec ls -l {} \; 2>/dev/null >textfiles
$
Basic pipes:
$ grep "hello" < hello.txt | sed -e "s/hello/bye/" > result.txt
$ ( grep "hello" | sed -e "s/hello/bye/" ) < hello.txt > result.txt
$
I also stated that redirecting output to an existing file would delete the file and create a new version of it. In the following example, the fourth line causes hello.txt to be overwritten with a new version of the file containing only a single line, bye.
$ echo "hello" >hello.txt
$ cat hello.txt
hello
$ echo "bye" >hello.txt
$ cat hello.txt
bye
You can set the noclobber option to prevent redirection from silently overwriting existing files. (The syntax shown below is for the Korn shell and bash; the C shell uses set noclobber instead.) In the following example, the option causes an error message when the user tries to overwrite the hello.txt file.
$ set -o noclobber
$ echo "hello" >hello.txt
$ cat hello.txt
hello
$ echo "bye" >hello.txt
File "hello.txt" already exists
$ cat hello.txt
hello
$ set +o noclobber
If noclobber is set, you can force a redirection to clobber any pre-existing file by using the >| redirection operator. This operator looks like a redirection to a pipe, but it's actually just a force redirect to override the noclobber option. In the following example the forced redirection operator prevents any error messages.
$ set -o noclobber
$ echo "hello" >|hello.txt
$ cat hello.txt
hello
$ echo "bye" >|hello.txt
$ cat hello.txt
bye
$ set +o noclobber
Combining standard output and standard error
Redirection is frequently used for jobs that run for a long period of time, or for jobs that produce a lot of output. For such jobs, redirection can capture the results in a file. When this is done, it's also necessary to capture any output errors. Remember that if you redirect standard output but not standard error, output will go to a file and error messages will still go to your screen. The following find command will save the results to found.txt, although errors still appear on the screen.
$ find / -name "*.txt" -exec ls -l {} \; >found.txt
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
The redirection operator is actually a number followed by the redirection symbol, as in the following example. If the number is omitted, 1 is the default.
$ find / -name "*.txt" -exec ls -l {} \; 1>found.txt
$
The following commands are equivalent:
$ find / -name "*.txt" -exec ls -l {} \; 1>found.txt
$ find / -name "*.txt" -exec ls -l {} \; >found.txt
$
Unix utilities open three files automatically when a program starts up. These files are given file descriptor numbers inside the program -- 0, 1, and 2 -- but they're more commonly known as stdin (standard input -- file descriptor 0), stdout (standard output -- file descriptor 1), and stderr (standard error -- file descriptor 2). When the program starts, default assignments for these files are made to /dev/tty, which is the device name for your terminal. The stdin file is assigned to the keyboard of your terminal, while stdout and stderr are assigned to the screen of your terminal. The output redirection operator defaults to 1; thus > and 1> are equivalent. The input redirection operators < and <0 are equivalent. Redirecting standard error, file descriptor 2, requires that its number be explicitly included in the redirection symbol.
The following examples use 1> to redirect standard output because it helps clarify how the redirection works. When reviewing these examples remember that > and 1> are the same.
One way to handle the two output streams is to create a separate log file for each, as in the following example.
$ find / -name "*.txt" -exec ls -l {} \; 1>found.txt 2>errors.txt
$
It is also possible to redirect an output by attaching it to an already open redirection using the >& redirection operator. In the following example, the standard output of find is redirected to the file result.txt. The 2>&1 redirection command instructs the shell to attach the output from standard error (2) to the output of standard output (1). Now both standard output and standard error are sent to result.txt.
$ find / -name "*.txt" -exec ls -l {} \; 1>result.txt 2>&1
$
The order of redirection is important. In the following example, the output of file descriptor 2 (standard error) is attached to file descriptor 1. At this point, standard output is still attached to the terminal, so standard error is sent to the terminal. The next redirection sends standard output to result.txt. This redirection doesn't drag file descriptor 2 along with it, so standard error is left pointing to the terminal device.
$ find / -name "*.txt" -exec ls -l {} \; 2>&1 1>result.txt
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
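The same ordering rule explains the common idiom for piping both streams into another program. The pipe attaches standard output before the command's own redirections are processed, so writing 2>&1 after the command duplicates standard error onto the already-piped standard output. A small sketch, using a made-up shell function named emit to stand in for any command that writes to both streams:

```shell
# emit stands in for any command that writes to both streams
emit() {
  echo "to stdout"
  echo "to stderr" >&2   # >&2 sends this line to standard error
}

emit 2>/dev/null | sort  # only standard output enters the pipe
emit 2>&1 | sort         # standard error is duplicated onto the piped
                         # standard output, so both lines get sorted
```

In the first command only to stdout reaches sort; in the second, both lines travel through the pipe.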
Input redirection from here documents
Perhaps one of the most useful forms of redirection is redirecting input from a here document. A shell script can be written that executes a command and supplies all of the command's input. This is frequently done for commands that normally run interactively. As an extreme example, I'll show you how to do this with the editor vi. I'm using vi for two reasons: first, it's interactive, and second, you're probably familiar with it already, so you'll have a better understanding of what the script is doing. Normally, hands-off editing is done with the sed command.
First, create a text file with several hello strings in it, as in the following example, then name it hello.txt.
sample hello.txt
hello world
hello broadway
hello dolly
Create a file named here.sh containing the lines in the example below. The second line starts the vi editor on the hello.txt file; the <<END-OF-INPUT operator tells the shell that vi's input comes from the current file, here.sh, read a line at a time until a line containing only END-OF-INPUT is found. The lines that follow are vi commands to globally search for hello, replace each instance with bye, write the file back out, and quit. After the END-OF-INPUT marker, a final echo statement reports that the editing is complete.
# here.sh - sample here document
vi hello.txt <<END-OF-INPUT
:g/hello/s//bye/g
:w
:q!
END-OF-INPUT
echo "Editing complete"
Change the mode on the file to make it executable:
$ chmod a+x here.sh
When you execute the here.sh script, you may receive a warning from vi that it's not running in interactive mode. Next, the actual editing takes place; afterwards, you can cat out the hello.txt file and see your handiwork.
$ ./here.sh
Vim: Warning: Input is not from a terminal
Editing complete
$ cat hello.txt
sample bye.txt
bye world
bye broadway
bye dolly
If you really want to suppress the vi warning, redirect the error to the /dev/null device, as in the following version of here.sh:
# here.sh - sample here document
vi hello.txt 2>/dev/null <<END-OF-INPUT
:g/hello/s//bye/g
:w
:q!
END-OF-INPUT
echo "Editing complete"
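For comparison, the equivalent hands-off edit with sed needs no here document at all, since sed takes its editing commands from the command line and never expects a terminal:

```shell
# Replace every hello with bye; sed writes the edited stream to
# standard output, which is redirected to a work file and moved back.
sed -e "s/hello/bye/g" hello.txt >hello.new && mv hello.new hello.txt
```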
Here documents frequently appear as small pieces of larger scripts. To make the here document stand out, it helps to indent that section of the script. Using a minus sign in the operator (<<- instead of <<) tells the shell to strip leading tab characters from each line before passing it to the program, so the body of the document can be indented with tabs. The following is an example:
# here.sh - sample here document
vi hello.txt 2>/dev/null <<-STOP-HERE
	:g/hello/s//bye/g
	:w
	:q!
STOP-HERE
echo "Editing complete"
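One more property of here documents is worth noting before moving on: the shell expands variables inside the document before the program ever sees the text. (Quoting the end marker, as in <<"END", suppresses the expansion.) A quick demonstration:

```shell
# The shell substitutes $name before cat reads a single line
name=dolly
cat <<END
hello $name
END
```

cat prints hello dolly. The ftp script in the next example depends on exactly this behavior to fill in directory and file names.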
Because it's an interactive program, the ftp utility is a common candidate for here document status. The following example starts ftp and redirects standard output and standard error to xfr.log. The process logs in to a remote system named nj_system, switches to binary transfer mode, creates two directories, transfers a file named newstuff.a to the remote system, and signs out again. Using a here document makes it possible to execute ftp through a shell script while seeing what the script is doing. The second example below is another method of doing this, but it involves a separate file with the ftp commands.
# xfr.sh - Transfers to a remote system
district=nj
ftplog=xfr.log
insbase=/usr/installations
insdir=$insbase/new
inskit=newstuff.a
echo "Transferring to" $district
ftp -n 1>>$ftplog 2>&1 "$district"_system <<-ALL-DONE
	user mo ddd789
	binary
	mkdir $insbase
	chmod 777 $insbase
	mkdir $insdir
	chmod 777 $insdir
	put $inskit $insdir/$inskit
	chmod 777 $insdir/$inskit
	bye
ALL-DONE
echo "Transfer to" $district "complete."
The command file must contain nothing but the commands for ftp, and can't take advantage of the script's variables. Here's a sample ftp_commands file:
user mo ddd789
binary
mkdir /usr/installations
chmod 777 /usr/installations
mkdir /usr/installations/new
chmod 777 /usr/installations/new
put newstuff.a /usr/installations/new/newstuff.a
chmod 777 /usr/installations/new/newstuff.a
bye

# xfr.sh - Transfers to a remote system
district=nj
ftplog=xfr.log
echo "Transferring to" $district
ftp -n 1>>$ftplog 2>&1 "$district"_system <ftp_commands
echo "Transfer to" $district "complete."
In our next installment, I'll cover Unix system and global variables. What are they and how do you use them? I have been meaning to do this one for a while, and now seems like a good time.

 

Understanding Unix shells and environment variables Part 1

A shell variable is a memory storage area that can be used to hold a value, which can then be used by any built-in shell command within a single shell. An environment variable is a shell variable that has been exported or published to the environment by a shell command so that shells and shell scripts executed below the parent shell also have access to the variable.
 Unix shells and environment variables: Read the whole series! 
One built-in shell command can set a shell variable value, while another can pick it up. In the following doecho script example, $PLACE is set in the first line and picked up in the second line by the built-in echo command.
Create this script and save it as doecho. Change the mode using chmod a+x doecho:
# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
Run the program as shown below.
In all of the following examples, I use the convention of ./command to execute a shell script in the current directory. You don't need to do this if your $PATH variable contains the . as one of the searched directories. The ./command method works for scripts in your current directory, even if the current directory isn't included on your path.
$ ./doecho
doecho says Hello Hollywood
$
In this first example, $PLACE is a shell variable.
Now, create another shell script called echoplace and change its mode to executable.
# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
Modify doecho to execute echoplace as its last step.
# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
./echoplace
Run the doecho script. The output is a bit surprising.
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello
$
In this example, echoplace is run as the last command of doecho. It tries to echo the $PLACE variable but comes up blank. Say goodbye to Hollywood.
To understand what happened here, you need to understand something about shell invocation -- the sequence of events that occurs when you run a shell or shell script. When a shell begins to execute any command, it checks to see whether the command is built in (like echo), an executable program (like vi or grep), a user-defined function, or an executable shell script. If it's any of the first three, the shell directly executes the command, function, or program; but if the command is an executable shell script, the shell spawns another running copy of itself -- a child shell. The spawned child shell uses the shell script as an input file and reads it in line by line as commands to execute.
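You can watch the shell make this classification with the type built-in (the exact wording of its report varies from one shell to another):

```shell
type echo   # reported as a shell builtin
type grep   # reported as an executable, e.g. grep is /usr/bin/grep
```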
When you type ./doecho to execute the doecho script, you're actually executing a command that is something like one of the following, depending on which shell you're using. (See the Resources section at the end of this column for more information on redirection.)
$ sh < ./doecho
            (or)
$ ksh <./doecho
The new shell, spawned as a child of your starting-level shell, opens doecho and begins reading commands from that file. It performs the same test on each command, looking for built-in commands, functions, programs, or shell scripts. Each time a shell script is encountered, another copy of the shell is spawned.
I have repeated the running of doecho so you can follow it through the steps described below. The output of doecho is repeated here, with extra spacing and notes.
$ ./doecho                  <-the command typed in shell one
                              launches shell two reading doecho.
doecho says Hello Hollywood <-shell two sets $PLACE and echoes
                              Shell three starts echoplace
echoplace says Hello        <-shell three cannot find $PLACE and
                              echoes a blank
$                           <-shells three and two exit. Back at shell one
As you're looking at a prompt on the screen, you're actually running a top-level shell. If you've just logged on, this will be shell one, where you type the command ./doecho. Shell two is started as a child of shell one. Its job is to read and execute doecho. The doecho script is repeated below.
The first command in doecho creates the shell variable $PLACE and assigns the value "Hollywood" to it. At this point, the $PLACE variable only exists with this assignment inside shell two. The echo command on the next line will print out doecho says Hello Hollywood and move on to the last line. Shell two reads in the line containing ./echoplace and recognizes this as a shell script. Shell two launches shell three as a child process, and shell three begins reading the commands in echoplace.
# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
./echoplace
The echoplace shell script is repeated below. The only executable line in echoplace is a repeat of the echoed message. However, $PLACE only exists with the value "Hollywood" in shell two. Shell three sees the line to echo echoplace says Hello and the $PLACE variable, and cannot find any value for $PLACE. Shell three creates its own local variable named $PLACE as an empty variable. When the script echoes it, it's empty and prints nothing.
# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
The assignment of "Hollywood" to $PLACE in shell two is only available inside shell two. If you type in a final command in shell one to echo $PLACE at the shell one level, you'll find that $PLACE is also blank in shell one.
$ echo "shell one says Hello " $PLACE
shell one says Hello
$
Thus far, you've only created and used a variable inside of a single shell level. You can, however, publish a shell variable to the environment, thereby creating an environment variable that's available both to the shell that published it and to all child shells started by the publishing shell. Use export in the Bourne and Korn shells.
$ PLACE=Hollywood; export PLACE
$
The Korn shell also has a command that both exports the variable and assigns a value to it.
$ export PLACE=Hollywood
$
The C shell uses a very different syntax for shell and environment variables. Assign a value to a shell variable by using set, then assign an environment variable using setenv. Note that setenv doesn't use the = operator.
> set PLACE=Hollywood
> setenv PLACE Hollywood
Back in the Korn or Bourne shells, if we revisit the doecho script and edit it to export the $PLACE variable, it becomes available in shell two (the publishing shell) and shell three (the child shell).
# doecho sample variable
PLACE=Hollywood; export PLACE
echo "doecho says Hello " $PLACE
./echoplace
When doecho is run, the output is changed. This happens because in shell three $PLACE is found as an environment variable that has been exported from shell two.
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
$
Assigning a value to $PLACE before you run doecho will help you verify its scope. After doecho is complete, echo the value of $PLACE at the shell one level. Notice that doecho in shell two and echoplace in shell three both see $PLACE's value as "Hollywood", but the top-level shell sees the value "Burbank". This is because $PLACE was exported in shell two. The environment variable $PLACE has scope in shell two and shell three, but not in shell one. Shell one creates its own local shell variable named $PLACE that is unaffected by shells two and three.
$ PLACE=Burbank
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
$ echo "shell one says Hello " $PLACE
shell one says Hello Burbank
$
Once a shell variable has been exported and become an environment variable, each child shell receives its own copy of it. A subshell can modify its copy, and that modification is inherited by any shells spawned below it, but it never propagates back up to the parent shell.
Make some changes to doecho by adding a repeat of the echo line after the return from echoplace.
# doecho sample variable
PLACE=Hollywood; export PLACE
echo "doecho says Hello " $PLACE
./echoplace
echo "doecho says Hello " $PLACE
Modify echoplace to change the value of $PLACE after it has been echoed, then echo it again.
# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
PLACE=Pasadena
echo "echoplace says Hello " $PLACE
Retype the previous sequence of commands as shown below. Shell three alters the value of $PLACE, a change that appears in shell three -- and in shell two, even after it returns from echoplace. Once a variable is published to the environment, it's fair game to any shell at or below the publishing level.
$ PLACE=Burbank
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
echoplace says Hello Pasadena
doecho says Hello Pasadena
$ echo "shell one says Hello " $PLACE
shell one says Hello Burbank
$
You have seen that the default action of a shell is to spawn a child shell whenever a shell script is encountered on the command line. Such behavior can be suppressed by using the dot command, which is a dot and a space placed before a command.
Execute doecho by starting it with a dot and a space, then echo the value of $PLACE when doecho is complete. In this example, shell one recognizes $PLACE as having been given the value "Pasadena".
$ . ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
echoplace says Hello Pasadena
doecho says Hello Pasadena
$ echo "shell one says Hello " $PLACE
shell one says Hello Pasadena
$
Normally, when a shell discovers that the command to execute is a shell script, it spawns a child shell, and that child reads in the script as commands. If the shell script is preceded by a dot and a space, however, the shell stops reading the current script or commands and starts reading the new script or commands without starting a new subshell.
When you type in . ./doecho, shell one doesn't spawn a child shell, but instead switches gears and begins reading from doecho. The doecho script initializes and exports the $PLACE variable. The export of $PLACE now affects all shells because you exported it at the shell one level.
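One way to see for yourself that the dot command does not spawn a subshell is to have a script report the process ID of the shell running it. The script name and the PID values in the transcript below are only illustrative.

```shell
# showpid reports the PID of the shell that is executing it;
# $$ expands to the current shell's process ID
echo "showpid is running in process $$"
```

Run the script both ways and compare the reported PID with your login shell's own:

$ echo $$
1234
$ ./showpid
showpid is running in process 5678
$ . ./showpid
showpid is running in process 1234
$

Executed normally, showpid runs in a child process with a new PID; executed with the dot command, it reports the PID of shell one.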
A dot script is very useful for setting up a temporary environment that you don't want to set up in your .profile. Suppose for instance that you have a specialized task that you do only on certain days, and that you need to set up some special environment variables for it. Place these variables in a file named specvars.
# specvars contains special variables for the
# special task that I do sometimes
WORKDIR=/some/dir
SPECIALVAR="Bandy Legs"
REPETITIONS=52
export WORKDIR SPECIALVAR REPETITIONS
If you execute this script by simply typing in the name of the specvars file, you won't get the expected effect, because a subshell, shell two, is created to execute specvars and the export command exports to shell two and below. Shell one doesn't view these exports as environment variables.
$ specvars
$ echo "WORKDIR is " $WORKDIR
WORKDIR is
$ 
Using the dot command causes the script to execute as part of shell one; the effect is now correct.
$ . specvars
$ echo "WORKDIR is " $WORKDIR
WORKDIR is /some/dir
$ 
So there you have some of the ins and outs of shell and environment variables, as well as some ways to get around some of their limitations. If you want to see your current environment variables, type the printenv command; it prints a list of all environment variables available to the current shell, which are also inherited by its child shells.

Understanding Unix shells and environment variables, Part 2

Examine and customize your Unix environment

Unix shells come with variables that are used by the shell or related commands. In addition to variables that you create, the shell itself requires or takes advantage of variables that can be set up for it. When you first log in to a Unix system, the /etc/passwd file contains the name of the shell that is to be run for you. This appears in the last field of the password file. To see yours, type cat /etc/passwd and pipe the result through grep looking for your userid. In the example below I have used my id, mjb.

$ cat /etc/passwd|grep mjb
mjb:x:500:500::/home/mjb:/bin/ksh

In this example, my logon runs the Korn shell. This shell reads and executes any existing file named /etc/profile, which a system administrator has programmed for basic setup actions required for all users. After executing /etc/profile, the shell executes $HOME/.profile, which is set up to contain my own environment. Both /etc/profile and $HOME/.profile set environment variables. The Bourne shell works in a similar fashion. The C shell also takes a similar approach, but uses more files. It runs /etc/csh.cshrc, then /etc/csh.login, then an entire raft of files in your home directory, such as ~/.cshrc, ~/.history, ~/.login, and, finally, ~/.cshdirs.
Regardless of the approach, the result is an environment in which the user will run, including environment variables. You can see your environment variables by using printenv or env. The following is a short example of the output.

$ printenv
USERNAME=
HISTSIZE=1000
HOSTNAME=my.system.com
LOGNAME=mjb
MAIL=/var/spool/mail/mjb
TERM=xterm
PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin/X11:/home/mjb/bin
HOME=/home/mjb
SHELL=/bin/ksh
PS1=[\u@\h \W]\$
Shells also use variables that are not part of the environment. For a description of the difference between shell and environment variables, see last month's column.
For example, PS1, listed above as an environment variable, is the prompt displayed on the screen when the shell is waiting for a new command. Another shell variable, PS2, contains the prompt to be used when a command is begun but not completed before Enter is pressed. To see the prompt in use, type the commands below. The first echoes the $PS2 prompt to the screen. Then a new command is started with an opening parenthesis. The user presses Enter immediately and the shell waits for a command and a closing parenthesis. The shell displays the > prompt to indicate that it is waiting for more input. The command is entered and Enter is pressed. Once again, the > prompt is displayed, because the user has not yet closed the open parenthesis. Finally, the user types ) and presses Enter, ending the command.

$ echo $PS2
>
$ (
> cat /etc/passwd|grep mjb
> )
$ 
You can create a more graphic version of this by adding a command to change the $PS2 prompt. In the following example, the value of the $PS2 prompt is changed and the same command sequence is entered. The $PS2 prompt is reset.

$ echo $PS2
>
$ PS2="more please> "
$ (
more please> cat /etc/passwd|grep mjb
more please> )
$ PS2="> "
$ echo $PS2
>
$
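The primary prompt can be changed the same way; the string used here is arbitrary.

```shell
# Assign a new primary prompt; an interactive shell displays it
# before each new command
PS1="ready> "
echo "$PS1"
```

Assigning PS1 back to its original value restores the familiar prompt.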
Why does the PS2 prompt have a value if it is not in the environment? Look at the printenv listing and you will not see an entry for PS2.

$ printenv
USERNAME=
HISTSIZE=1000
HOSTNAME=my.system.com
LOGNAME=mjb
MAIL=/var/spool/mail/mjb
TERM=xterm
PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin/X11:/home/mjb/bin
HOME=/home/mjb
SHELL=/bin/ksh
PS1=[\u@\h \W]\$
The shell sets up some default shell variables; PS2 is one of them. Other useful shell variables that are set or used in the Korn shell are:

  • _ (underscore) -- When an external command is executed by the shell, this is set in the environment of the new process to the path of the executed command. In interactive use, this parameter is also set in the parent shell to the last word of the previous command.
  • COLUMNS -- The number of columns on the terminal or window.
  • ENV -- If this parameter is found to be set after any profile files are executed, the expanded value is used as a shell startup file. It typically contains function and alias definitions.
  • ERRNO -- Integer value of the shell's errno variable -- this indicates the reason the last system call failed.
  • HISTFILE -- The name of the file used to store history. When assigned, history is loaded from the specified file. Multiple invocations of a shell running on the same machine will share history if their HISTFILE parameters all point to the same file. If HISTFILE isn't set, the default history file is $HOME/.sh_history.
  • HISTSIZE -- The number of commands normally stored in the history file. Default value is 128.
  • IFS -- Internal field separator, used during substitution and by the read command to split values into distinct arguments; normally set to space, tab, and newline.
  • LINENO -- The line number of the function or shell script that is being executed. This variable is useful for debugging shell scripts. Just add an echo $LINENO at various points and you should be able to determine your location within a script.
  • LINES -- Set to the number of lines on the terminal or window.
  • PPID -- The process ID of the shell's parent. A read-only variable.
  • PATH -- A colon-separated list of directories that are searched when seeking commands.
  • PS1 -- The primary prompt for interactive shells.
  • PS2 -- Secondary prompt string; default value is >. Used when more input is needed to complete a command.
  • PWD -- The current working directory. This may be unset or null if the shell does not know where it is.
  • RANDOM -- A simple random number generator. Every time RANDOM is referenced, it is assigned the next number in a random number series. The point in the series can be set by assigning a number to RANDOM.
  • REPLY -- Default parameter for the read command if no names are given.
  • SECONDS -- The number of seconds since the shell started or, if the parameter has been assigned an integer value, the number of seconds since the assignment plus the value that was assigned.
  • TMOUT -- If set to a positive integer in an interactive shell, it specifies the maximum number of seconds the shell will wait for input after printing the primary prompt (PS1). If this time is exceeded, the shell exits.
  • TMPDIR -- The directory where shell temporary files are created. If this parameter is not set, or does not contain the absolute path of a directory, temporary files are created in /tmp.
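A few of these variables can be watched in action with a short sequence. The sample data is made up, and the RANDOM and SECONDS values will of course differ every time you run it.

```shell
# IFS controls how read splits a line into fields; here it is set
# to a colon just for the read command
echo "a:b:c" | while IFS=: read x y z
do
    echo "second field is $y"
done

# RANDOM yields a new number each time it is referenced
echo $RANDOM
echo $RANDOM

# SECONDS counts up from the moment the shell started
echo "this shell has been up for $SECONDS seconds"
```

Prefixing the assignment to the read command, as above, limits the change to IFS to that one command.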
The C shell uses variables with similar but lowercase names, such as prompt, path, home, and so on.
Other interesting variables are the locale setting variables. These variables are LC_ALL, LC_CTYPE, LC_COLLATE, and LC_MESSAGES. LC_ALL effectively overrides the values for the other three LC variables; you can set them independently by not setting LC_ALL.

  • LC_ALL -- Determines the locale to be used to override any previously set values.
  • LC_COLLATE -- Defines the collating sequence to use when sorting.
  • LC_CTYPE -- Determines the locale for the interpretation of a sequence of bytes.
  • LC_MESSAGES -- Determines the language in which messages should be written.
LC_ALL can be used to change the language for the system. Try the following sequence of commands to see these in action. The language is changed to French (fr) and grep is invoked without a search pattern, so it prints its usage message -- in French. LC_ALL is then set to Spanish (español, thus es) and the same error is repeated. Finally, LC_ALL is unset and the message returns to English.

$ export LC_ALL=fr
$ grep -x
Usage: grep [OPTION]...PATRON [FICHIER]
Pour en savoir davantage, faites: 'grep --help'
$ LC_ALL=es
$ grep -x
Modo de empleo: grep [OPCION]...PATRON [FICHERO]
Pruebe 'grep --help' para mas informacion
$ unset LC_ALL
$ grep -x
Usage: grep [OPTION]...PATTERN [FILE]
Try 'grep --help' for more information.
$
End of article.

The language of shells


Making sense of shell commands

Summary
Working with shells can be difficult, as they require unusual and specific combinations of words and punctuation. This month, Mo Budlong helps you out by explaining some basic commands, such as ls, echo, and man. Also, Mo corrects a problem from May's Unix 101 in a sidebar. (1,300 words)

From the end user's perspective, the shell is the most important program on the Unix system because it is the user's interface to the Unix system kernel. The shell reads and interprets strings of characters and words.
The shell operates in a simple loop:
  1. Accept a command
  2. Interpret the command
  3. Execute the command
  4. Wait for another command
The shell displays a prompt, notifying the user that it is ready to accept a command. It would be nice if you could speak or type instructions into the computer in some form of natural language.
OK, Hal. Sort out my correspondence, throw out anything
that is too old, and archive the rest.
Unfortunately, the shell recognizes a very limited set of command words, so the user must offer commands in a way that it understands. This means learning to string odd words and punctuation together.
Each shell command consists of a command name, followed, if desired, by command options and arguments. The command name, options, and arguments are separated by blank space.
The shell is one of many programs that the Unix kernel can run for you. When the kernel is running a program, that program is called a process. The kernel can run the same program many times (one shell for each user), and each running copy of the program is a separate process. Because each user runs a separate copy of the shell, each user is running in his or her own process space.
Many basic shell commands are subroutines that are built in to the shell program. The echo command is almost always built in to a shell.
$ echo "Hello, Hal"
Hello, Hal
$
Commands not built in to the shell require that the kernel start another process in order to run.
When you execute a command that is not built in to a shell, the shell asks the kernel to create a new subprocess (or child process) to perform the command. The child process exists just long enough to execute the command. The shell waits for the child process to finish before accepting the next command.
The basic form of a Unix command is:
command name [-options] [arguments] 
The square brackets signify parts of the command that may be omitted.
The command name is the name of a built-in command or a separate program you want the shell to execute. The command options, usually indicated by a dash, allow you to alter the behavior of the command. The arguments are the names of files, directories, or programs that the command needs to access.
ls -l /home/mjb
The ls command is usually a separate program rather than a built-in command. The command above will get you a long listing of the contents of the /home/mjb directory. In this example, ls is the command name, -l is an option that tells ls to create a long, detailed output, and /home/mjb is an argument naming the directory that ls is to list.
The Unix shell is case sensitive, and most Unix commands are lower case.
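A quick way to see that case sensitivity is to try an uppercase variant of a command that exists in lowercase; on a normal system, no command named LS is found anywhere.

```shell
# ls is found on the search path; its uppercase twin normally is not
ls /tmp >/dev/null && echo "ls works"
LS /tmp 2>/dev/null || echo "LS: command not found"
```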
Some of the more popular shells are sh (the Bourne shell), ksh (the Korn shell), csh (the C shell), bash (the Bourne-Again shell), pdksh (the Public Domain Korn shell), and tcsh (the TENEX C shell).
You can frequently identify your shell by typing:
echo $SHELL
Unix recognizes certain special characters as command directives. If you use a special character in a command, make sure you understand what it does. The special characters are / < > ! $ % ^ & * | { } ~ and ;. When naming files and directories on Unix, it is safest to only use numerals, upper and lower case letters, and the period, dash, and underscore characters.
A Unix command line is a sequence of characters in the syntax of the target shell language. Of the characters in a command line, some are known as metacharacters. Metacharacters have a special meaning to the shell. The metacharacters in the Korn shell are:
  • ; -- Separates multiple commands on a command line
  • & -- Causes the preceding command to execute asynchronously (as its own separate process so that the next one does not wait for it to complete)
  • () -- Enclose commands that are to be launched in a separate shell
  • | -- Pipes the output of the command to the left of the pipe to the input of the command on the right of the pipe
  • > -- Redirects output to a file or device
  • >> -- Redirects output to a file or device and appends to it instead of overwriting it
  • < -- Redirects input from a file or device
  • newline -- Ends a command or set of commands
  • space -- Separates command words
  • tab -- Separates command words
Some metacharacters can be used in combinations, such as ||, &&, and >>. With these metacharacters you can define a command-line word, which is a sequence of characters separated by one or more nonquoted metacharacters.
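A short sequence shows several of these metacharacters at work. The file /tmp/meta_demo.txt is just a scratch file for the illustration.

```shell
# Two commands on one line, separated by a semicolon
echo first; echo second
# > creates (or overwrites) a file; >> appends to it
echo one > /tmp/meta_demo.txt
echo two >> /tmp/meta_demo.txt
# | pipes the output of cat into the input of grep
cat /tmp/meta_demo.txt | grep two
# < redirects the file to the command's standard input
wc -l < /tmp/meta_demo.txt
```

The last command reports 2, the number of lines written to the scratch file by the two redirections above it.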
To access the online manuals, use the man command, followed by the name of the command you need help with. For instance, to see the manual for the ls command, enter:
man ls
End of article.
A note to my readers
 
I would like to note a correction to the May edition of Unix 101, in which I said:
"Once a shell variable has been exported and becomes an environment variable, it can be modified by a subshell. The modification affects the environment variable at all levels where the environment variable has scope."
Several sharp-eyed readers picked up on this and sent comments ranging from, "Oh, no, you can't" to "Gee, whiz, which shell are you using? It doesn't work for me."
They are right. A subshell cannot modify an environment variable and return it to the parent. It can modify an environment variable and pass it on to a child process, but it cannot return the new value to a higher level. To illustrate this correctly, create the following three script files and grant them execute privileges using chmod a+x script*.

# script1
myvar="Hello" ; export myvar
echo "script1:myvar=" $myvar
./script2
echo "Back from script1 and script2"
echo "script1:myvar=" $myvar

# script2
myvar="Goodbye"
echo "script2:myvar=" $myvar
./script3

# script3
echo "script3:myvar=" $myvar
If you run this sequence, the results show that $myvar exists in all three scripts (and, consequently, in all three processes), but modifying it in script2 only affects its value in script3.

$ ./script1
script1:myvar= Hello
script2:myvar= Goodbye
script3:myvar= Goodbye
Back from script1 and script2
script1:myvar= Hello
$
My apologies to those of you who tried to make the example in the May issue work.

Using cron basics

Utility helps you get your timing right

Summary
Cron allows you to program jobs to be performed at specific times or at steady intervals. This month Mo Budlong explains some cron fundamentals and runs an experiment. (1,500 words)

At one time cron was easy to describe: It involved only one or two files. All you had to do was edit the files and -- voilà! -- cron did the rest. Now cron has become several files and several programs, and at first glance it seems quite complex. Fortunately, someone was clever enough to create a simplified interface along with the new complexity.
Cron is really two separate programs. The cron daemon, usually called cron or crond, is a continually running program that is typically started as part of the boot process.
To check that it's running on your system, use ps and grep to locate the process.
ps -ef|grep cron
root    387      1   0   Jun 29 ?     00:00:00 crond
root  32304  20607   0   00:18 pts/0  00:00:00 grep cron
In the example above, crond is running as process 387. Process 32304 is the grep cron command used to locate crond.
If cron does not appear to be running on your system, check with your system administrator, because a system without cron is unusual.
The crond process wakes up each minute to check a set of cron table files that list tasks and the times when those tasks are to be performed. If any programs need to be run, it runs them and then goes back to sleep. You don't need to concern yourself with the mechanics of the cron daemon other than to know that it exists and that it is constantly polling the cron table files.
The cron table files vary from system to system but usually consist of the following:
  • Any files in /var/spool/cron or /var/spool/cron/crontabs. Those are individual files created by any user using the cron facility. Each file is given the name of the user. You will almost always find a root file in /var/spool/cron/root. If the user account named jinx is using cron, you will also find a jinx file as /var/spool/cron/jinx.
    ls -l /var/spool/cron
    -rw-------   1  root    root          3768 Jul 14  23:54  root
    -rw-------   1  root    group          207 Jul 15  22:18  jinx
  • A cron file that may be named /etc/crontab. That is the traditional name of the original cron table file.
  • Any files in the /etc/cron.d directory.
Each cron table file has different functions in the system. As a user, you will be editing or making entries into the /var/spool/cron file for your account.
Another part of cron is the table editor, crontab, which edits the file in /var/spool/cron. The crontab program knows where the files that need to be edited are, which makes things much easier on you.
The crontab utility has three options: -l, -r, and -e. The -l option lists the contents of the current table file for your current userid, the -e option lets you edit the table file, and the -r option removes a table file.
A cron table file is made up of one line per entry. An entry consists of two categories of data: when to run a command and which command to run.
A line contains six fields, unless it begins with a hash mark (#), which is treated as a comment. The six fields, which must be separated by white space (tabs or spaces), are:
  1. Minute of the hour in which to run (0-59)
  2. Hour of the day in which to run (0-23)
  3. Day of the month (1-31)
  4. Month of the year in which to run (1-12)
  5. Day of the week in which to run (0-6) (0=Sunday)
  6. The command to execute
As you can see, the "when to run" fields are the first five in the table. The final field holds the command to run.
An entry in the first five columns can consist of:
  • A number in the specified range
  • A range of numbers in the specified range; for example, 2-10
  • A comma-separated list consisting of individual numbers or ranges of numbers, as in 1,2,3-7,8
  • An asterisk that stands for all valid values
Note that lists and ranges of numbers must not contain spaces or tabs, which are reserved for separating fields.
A sample cron table file might be displayed with the crontab -l command. The following example includes line numbers to clarify the explanation.
1     $ crontab -l
2     # DO NOT EDIT THIS FILE
3     # installed Sat Jul 15
4     #min    hr   day   mon   weekday  command
5     30      *     *     *     *       some_command
6     15,45   1-3   *     *     *       another_command
7     25      1     *     *     0       sunday_job
8     45      3     1     *     *       monthly_report
9     *       15    *     *     *       too_often
10    0       15    *     *     1-5     better_job
$
Lines 2 through 4 contain comments and are ignored. Line 5 runs the command some_command at 30 minutes past the hour. Note that the fields for hour, day, month, and weekday were all left with the asterisk; therefore some_command runs at 30 minutes past the hour, every hour of every day.
Line 6 runs the command another_command at 15 and 45 minutes past the hour for hours 1 through 3, namely, 1:15, 1:45, 2:15, 2:45, 3:15, and 3:45 a.m.
Line 7 specifies that sunday_job is to be run at 1:25 a.m., only on Sundays.
Line 8 runs monthly_report at 3:45 a.m. on the first day of each month.
Line 9 is a typical cron table entry error. The user wants to run a task daily at 3 p.m., but has entered only the hour. The asterisk in the minute column causes the job to run once every minute from 3:00 p.m. through 3:59 p.m.
Line 10 corrects that error and adds weekdays 1 through 5, limiting the job to 3:00 p.m., Monday through Friday.
Now that you know cron basics, try the following experiment. Cron is usually used to run a script, but it can run any command. If you do not have cron privileges, you will have to follow as best you can, or work with someone who has them.
Use the crontab editor to edit a new crontab entry. In this example I am asking cron to execute something every minute.
$ crontab -e
0-59    *    *    *    *    echo `date` "Hello" >>$HOME/junk.txt
$
The sixth field contains the command to echo the output from date (note the reverse quotes around date), followed by "Hello", and also the command to append the result to a file in my home directory, which is named junk.txt.
Close this cron table file. If you have cron privileges and have entered the command correctly, you will receive a message that the file has been saved.
Use crontab -l to view the file.
$ crontab -l
# DO NOT EDIT THIS FILE
# installed Sat Jul 15
0-59    *    *    *    *    echo `date` "Hello" >>$HOME/junk.txt
$
Change to your home directory, use the touch command to create junk.txt in case it does not exist, and then use tail -f to open the file and display the contents line by line as they are inserted by cron.
$ cd
$ touch junk.txt
$ tail -f junk.txt
Sat Jul 15 15:23:07 PDT Hello
Sat Jul 15 15:24:07 PDT Hello
Sat Jul 15 15:25:07 PDT Hello
Sat Jul 15 15:26:07 PDT Hello
The screen will update once per minute as the information is inserted into junk.txt.
Stop the display by pressing Control-C.
Be sure to clean up the cron table files by using the crontab -e option to open the cron table file and remove the line you just created.
All commands executed by cron should run silently with no output. Because cron runs as a detached job, it has no terminal to write messages to. However, the best-laid plans of mice, men, and programmers are not without deviations from the expected course, and it is entirely possible that a command, script, or job may produce output or, heaven forbid, some actual error messages.
To handle that, cron traps all the output to standard out or to standard error that has not been redirected to a file, as in the example just tested. The trapped output is dropped into a mail file and is sent either to the user who originated the command or to root. Either way, it conveniently traps errors without forcing cron to blow up or abort.
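If you would rather not receive that mail for a particular job, you can do the redirection yourself in the cron table entry. The idiom is sketched below; 2>&1 points standard error (file descriptor 2) at wherever standard output (1) is currently going, so sending both to /dev/null makes the command completely silent, even when it fails.

```shell
# With both stdout and stderr sent to /dev/null, the command
# prints nothing at all, whether it succeeds or fails
if ls /no/such/directory >/dev/null 2>&1
then
    echo "ls succeeded"
else
    echo "ls failed, but its error message was discarded"
fi
```

In a cron table file the same redirection goes at the end of the entry, for example (the job path here is hypothetical):

30  1  *  *  *  /home/mjb/bin/nightly_job >/dev/null 2>&1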

Traveling down the Unix $PATH

Why are some commands executed, and some ./executed?

Summary
Why do some commands need a dot-slash in order to run? In this month's Unix 101 column, Mo Budlong explores the answer to this question, and explains the difference between built-in and executable commands. (1,200 words)

This article is based on a question that came out of July's installment of Unix 101:
Why is it that some commands can simply be executed, while others must be ./executed? In other words, why do some commands need a dot-slash in front of them to run? Rather than giving you a short answer, I am going to explore a couple of things and hope you find them enlightening.
If you create a new shell script, will you be able to run it with the first command below, or will you need to resort to the second?
$ newscript
$ ./newscript
$
Commands in Unix are either builtins or executables. Builtins are part of the shell you are currently running. Examples include echo, read, and export.
Any command that is not built in must be an executable. There are two types of executables: scripts written in an interpreted language, such as sh, ksh, csh, or Perl, and compiled programs, such as a program written in C and compiled to a binary.
Commands created by using an alias in ksh also break down into these two main categories, because the alias is translated and then issued as either a builtin or an executable. The following examples create aliases for the builtin echo and the executable grep.
$ alias sayit='echo '
$ alias g='grep '
In each case, after alias substitution is completed, the command becomes a builtin or an executable.
The shell can always locate builtins, because they are built in to the currently executing shell. You can use the type command to verify the nature of echo.
$ type echo
echo is a shell builtin
$
Now use the type command to check on grep. type will give you the directory that contains the executable grep program.
$ type grep
grep is /bin/grep
$
Whether you enter grep as a command or ask for its location using type, the operating system finds grep by using the $PATH environment variable. If type can find grep, then echoing out the $PATH variable will verify that the path to the directory containing grep is part of the $PATH variable.
$ type grep
grep is /bin/grep
$ echo $PATH
/bin:/usr/bin:/usr/local/bin:/home/mjb/bin
$
The directories listed in $PATH are separated by colons. The above example includes /bin, /usr/bin, /usr/local/bin, and /home/mjb/bin. As an aside, the type command is itself a builtin.
$ type type
type is a shell builtin
$
Another useful command similar to type is whereis, which will usually locate a command and its manual entry.
$ whereis grep
grep: /bin/grep /usr/man/man1/grep.1
$
The shell reads and interprets strings of characters and words typed at the keyboard. Unix shells operate in a simple loop:
  1. Accept a command
  2. Interpret the command
  3. Execute the command
  4. Wait for another command
In step 3, the shell searches for the command to be executed first in the shell itself and then in each of the directories listed in the $PATH. If it can't be found in one of these path directories, an error results.
$ zowie
zowie: command not found
$
It's important to note that the shell does not search the current directory unless that directory happens to be in the $PATH variable. This is important to understand, especially if you came to Unix from an MS-DOS background. MS-DOS uses a PATH variable as well, but it searches the current directory before it searches any of the directories in the user's PATH.
Some users have had the foresight to include the current directory in their $PATH variable. This will appear as a single dot, the Unix shorthand for current directory. Note the dot at the end of the $PATH variable below.
$ echo $PATH
/bin:/usr/bin:/usr/local/bin:/home/mjb/bin:.
If you have the dot in your $PATH variable, create a new directory under your home directory, such as $HOME/temp, and change to it.
$ cd $HOME
$ mkdir temp
$ cd temp
$
Use the vi editor to create a simple script.
# sayhello
echo "Hello"
Save it and change the mode to executable.
$ chmod a+x sayhello
$
If you have the dot in your $PATH variable, you'll be able to execute the command directly.
$ sayhello
Hello
$
If you don't have a dot in your $PATH variable, the computer will search through your $PATH (anywhere but the current directory) and report failure.
$ sayhello
sayhello: command not found
$
If you type an unadorned command such as sayhello, the computer searches for it. However, if the command name contains any path information, the shell treats it as an explicit path, relative or absolute, and looks only where you tell it to. Consequently, ./sayhello locates the command in the current directory.
$ ./sayhello
Hello
$
Obviously, the dot-slash version works whether or not you have a dot in your $PATH variable, because the dot-slash precludes the shell's search for the command.
To get a dot into your $PATH if you don't have one, you need to edit your personal startup profile, usually called .profile and located in your $HOME directory. Look for a line that exports the PATH variable, such as line 5 below. (Line numbers are included here for easy reference, but are not part of the file.) This file already has a line 4 that includes some local additions to the default $PATH.
1.  # .profile
2.  # User specified environment
3.  USERNAME="mjb"
4.  PATH=$PATH:$HOME/bin
5.  export USERNAME PATH
If line 4 did not exist, you'd want to create a line that read:
PATH=$PATH:.
In this case, edit line 4 to read:
PATH=$PATH:$HOME/bin:.
Now, whenever you log in, the dot is added to your search $PATH for commands.
So, the simple rule is: if you want to execute any command in any directory not on your $PATH, including the current directory, you must specify a path to locate the command. This includes a ./ for the current directory.
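The rule also works in the other direction: any command word containing a slash bypasses the $PATH search entirely. /bin/echo is used here only because its location is predictable on most systems.

```shell
# A slash anywhere in the command name makes the shell skip the
# $PATH search and go straight to the named file
/bin/echo "found by explicit path, not by PATH search"
```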