Thursday, February 24, 2011
Straightforward translations of English movie titles.
Tomorrow Never Dies: repu enthaki saavadu.
Goldfinger: bangaaru velu
Mummy = Amma
Mummy returns= thirigochina Amma
true lies= nijam abaddam aadindi
Terminator: muginchuvaadu
I know what you did last summer: poyina vesavilo nuvvem chesaavo naaku thelsu
Hell Boy: narakapu pilladu.
Fantastic four: adbhuthamina aa naluguru.
Angels and Demons: devathalu mariyu deyyalu
Evil dead: maa chedda chaavu
Evil dead 2: maa chedda chaavu rendosaari
Evil dead 3 : maa chedda chaavu moodosaari
salt : uppu
Rising bull: piki legusthunna yeddu
Pulp fiction: Gujju gharshana
I am legend: nenu chala goppavaadini.
A Nightmare On Elm Street: ELM veedhilo peedakala
Iron Man: inapa manishi
I know who killed me: nannu sampinodu naaku thelsu
I Can't Think Straight: nenu thinnaga aalochinchalenu.
Men In Black: Cheekatilo magaallu
Tomb Raider: samaadhula meeda swari chesedi.
Mission Impossible: Asalu emi cheyyalemu
G I Joe:The rise of Cobra: G I Joe mariyu piki lesina thachupaamu.
Gone in 60 sec: nimishamlo poyindi.
Gone with the Wind: Gaalitho paatu poyindi.
Paranormal Activity: asaadhaaranamayina charya.
Hurt locker: Noppini bhandinchevaadu.
Priest: poojaari
vampire kiss: pisacham pettina muddu.
Posted by Appleman at 3:04 AM 0 comments
Wednesday, February 9, 2011
20 Days to the Top free download

This book is a winner! I've read many sales books offering the same tired
formulas and "power closes" designed to trap unsuspecting consumers into
a deceitful sales web. Refreshingly, Brian Sullivan offers a proven,
duplicatable formula based on learning what the customer really wants,
and giving it to them in an ethical way they find hard to resist. One
problem with most sales books and training is that the student has no
way to easily remember and implement what they've learned, so the
initial enthusiasm quickly wears off and sales people resort to their
old way of doing things. With easy to remember acronyms and PRECISE call
sheets, you'll soon be asking CLEAR questions and using SHARP responses
to customer concerns, and having more fun and making a lot more money
along the way. Buy this book and become a PRECISE selling superstar.
Posted by Appleman at 5:14 AM 0 comments
India's 50 Most Powerful People 2009

Posted by Appleman at 5:12 AM 0 comments
Labels: Interesting, News
Linux+ Certification Bible free download

Posted by Appleman at 5:11 AM 0 comments
Labels: E-Books, Education, Engineering
Purging the process, Part 1
Introduction to pipes, filters, and redirection, Part 1
Summary
If you've arrived at Unix from the graphical user interface (GUI) world of Windows or Mac OS, you're probably not familiar with pipes and filters. Even among character-based interfaces, only a few of them, such as MS-DOS, provide even rudimentary pipes and redirection.
Redirection allows a user to redirect output that would normally go to the screen and instead send it to a file or another process. Input that normally comes from the keyboard can be redirected to come from a file or another process.
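As a quick preview (a minimal sketch; listing.txt is just an illustrative file name), output and input redirection look like this:

$ ls -l > listing.txt     # the directory listing goes into listing.txt instead of onto the screen
$ wc -l < listing.txt     # wc reads its input from listing.txt instead of from the keyboard

The rest of this article builds these ideas up step by step with grep, cat, and find.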
Every command you run has three standard files associated with it: standard input (stdin), standard output (stdout), and standard error (stderr). By default all three are attached to /dev/tty, which is the device name for your terminal. The stdin file is assigned to the keyboard of your terminal, while stdout and stderr are assigned to its screen.

You can see stdin and stdout at work with a simple experiment using grep. Type a grep command to find lines containing the word hello, then type the following lines at your terminal. At the end of each line press Enter to move down to the next line. Watch what happens as you type say hello.

What you type:

$ grep "hello"
Now is the time
for every good person to
say hello.

What appears on your screen:

$ grep "hello"
Now is the time
for every good person to
say hello.
say hello.

When you have seen enough, press Control-D to stop grep. Control-D is an end-of-file marker and can be entered as a keystroke to stop any utility that is taking its input from the keyboard.

The grep "hello" line is a command to search standard input for lines containing hello and echo any such line found to standard output. The Unix console automatically echoes anything you type, so the three lines appear on the screen as you type them. Then grep hits a line containing hello and decides to output it to standard out, and say hello appears on the screen a second time. The second appearance is the output from grep.
. >
) as shown in the example below. The same grep
command is redirected to send its output to a file named junk.txt
. The say hello
line doesn't appear a second time because it's been directed to the junk.txt
file. After the user presses Control-D, cat
is used to display the contents of junk.txt
, which contains grep
's single output line. $ grep "hello" >junk.txt Now is the time for every good person to say hello. (type control-D here) $ cat junk.txt say hello. $
<
). In order to demonstrate this, we need a file that can be used for input. Use vi to create the following sample file and save it as hello.txt
. Now is the time for every good person to say hello.
grep
is the single say hello
. Because input is being drawn from a file, you don't need to use Control-D to stop the process. $ grep "hello" <hello.txt say hello.
Input and output redirection can be combined. When grep starts up, it takes its input from hello.txt and outputs the result to junk.txt. There is no output on the screen, but you can use cat to display junk.txt and verify the contents.

$ grep "hello" <hello.txt >junk.txt
$ cat junk.txt
say hello.
$
If you run the command again searching for "Now", the previous contents of junk.txt has been replaced with the new output from grep, the single line Now is the time.

$ grep "Now" <hello.txt >junk.txt
$ cat junk.txt
Now is the time
$
Because grep is designed to search for a string in a file, or files, it uses a command-line syntax that lets you name a file on the command line, and the input redirection symbol is not needed. Internally, grep checks if a file is named on the command line and opens and uses it. If no file name is found, standard input is used. The following command lines for grep have the identical effect: the first redirects hello.txt to standard input and uses it for input; the second command opens hello.txt as a file and uses it for input. Note that grep doesn't expect an output file to be named on the command line. To get the output into a file, you must use output redirection. It doesn't hurt to redirect grep input, but in the case of grep, the redirection is already taken care of for you on the command line.

$ grep "Now" <hello.txt >junk.txt
$ grep "Now" hello.txt >junk.txt
Output can also be added to the end of an existing file using the append operator, a double right angle bracket (>>). The following example uses echo, which normally outputs to the screen, to create the hello.txt file without using an editor. The output of the echo command is redirected into the file, and two more lines are appended to it.

$ echo "Now is the time" >hello.txt
$ echo "for every good person to" >>hello.txt
$ echo "say hello." >>hello.txt
$ cat hello.txt
Now is the time
for every good person to
say hello.
$
The output of one command can also be fed directly into the input of another; the pipe symbol (|) is used as a connector between the two programs. In the following example, look at the first part of the command up to the first pipe symbol. The cat command normally outputs to the screen; in this case, however, the output has been sent into a pipe. On the righthand side of the pipe, this output becomes the input to grep "hello". The output from grep "hello" is in turn sent into another pipe. On the right side of that pipe, the output is used as standard input to a sed command that searches for hello and replaces it with bye. The final result is redirected to a file named result.txt, which cat displays on the screen as say bye.

$ cat hello.txt | grep "hello" | sed -e "s/hello/bye/" > result.txt
$ cat result.txt
say bye.
$
Without pipes, the same job would need intermediate files and extra rm steps to clean up the intermediate work files that were created.

$ cat hello.txt >wrk1.txt
$ grep "hello" <wrk1.txt >wrk2.txt
$ sed -e "s/hello/bye/" <wrk2.txt >result.txt
$ cat result.txt
say bye.
$ rm wrk1.txt wrk2.txt
Getting hello.txt into the grep command could also be done in several other ways. Two examples are shown below. The first redirects input to grep from hello.txt on the lefthand side of the pipe; the second puts parentheses around the grep and sed commands, groups them as a subprocess, then redirects input and output to the grouped process.

$ grep "hello" < hello.txt | sed -e "s/hello/bye/" > result.txt
$ ( grep "hello" | sed -e "s/hello/bye/" ) < hello.txt > result.txt
$
So far I've only shown you how to pipe and redirect standard output, but it's frequently useful to do something with error output. In the following example, find is being used to search the entire system (starting at /) for files with a .txt extension. Whenever one is found, its full directory entry is placed in a file named textfiles. The example below shows sample error messages that are generated when find attempts to access an unavailable directory.

$ find / -name *.txt -exec ls -l {} \; >textfiles
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
Unwanted output can be thrown away by sending it to /dev/null, which is a special device that can be thought of as a wastebasket for bytes written to it on output. Everything that goes to /dev/null disappears. To redirect standard error, use a right angle bracket preceded by a 2, which is the file descriptor number for standard error. If you don't care about error messages, send them to the /dev/null byte bucket.

$ find / -name *.txt -exec ls -l {} \; 2>/dev/null >textfiles
$
Adding a sort to the pipeline produces a list of .txt files sorted in order by the third field in the ls -l directory entry, the owner's name.

$ find / -name *.txt -exec ls -l {} \; 2>/dev/null |sort -k 3 >textfiles
$
The whole command can be saved as a small shell script, usertexts, and its output redirected when it is run.

#!/usr/bin/sh
# usertexts
# outputs a listing of text files on the system, ordered by owner id
find / -name *.txt -exec ls -l {} \; 2>/dev/null |sort -k 3

$ usertexts >textfiles
$
Purging the process, Part 2
Advanced topics in pipes, filters, and redirection
Last month I covered several basics, such as input redirection:
$ grep "hello" <hello.txt
say hello.
Output redirection:

$ grep "hello" >junk.txt
Now is the time
for every good person to
say hello.
(type control-D here)
$ cat junk.txt
say hello.
$

Naming an input file on the command line instead of redirecting it:

$ grep "Now" <hello.txt >junk.txt
$ grep "Now" hello.txt >junk.txt

Creating and appending to a file with echo:

$ echo "Now is the time" >hello.txt
$ echo "for every good person to" >>hello.txt
$ echo "say hello." >>hello.txt
$ cat hello.txt
Now is the time
for every good person to
say hello.
$

Redirecting error output to the /dev/null byte wastebasket:

$ find / -name *.txt -exec ls -l {} \; 2>/dev/null >textfiles
$

And pipes, including grouping commands as a subprocess:

$ grep "hello" < hello.txt | sed -e "s/hello/bye/" > result.txt
$ ( grep "hello" | sed -e "s/hello/bye/" ) < hello.txt > result.txt
$
Redirecting output to a file that already exists causes it to be overwritten. In the following example, hello.txt is overwritten with a new version of the file containing only a single line, bye.

$ echo "hello" >hello.txt
$ cat hello.txt
hello
$ echo "bye" >hello.txt
$ cat hello.txt
bye
You can set the noclobber option to prevent redirected files from automatically overwriting their predecessors. In the following example, the option causes an error message at line six when the user tries to overwrite the hello.txt file.

$ set noclobber
$ echo "hello" >hello.txt
$ cat hello.txt
hello
$ echo "bye" >hello.txt
File "hello.txt" already exists
$ cat hello.txt
hello
$ unset noclobber
Even when noclobber is set, you can force a redirection to clobber any pre-existing file by using the >| redirection operator. This operator looks like a redirection to a pipe, but it's actually just a force redirect to override the noclobber option. In the following example the forced redirection operator prevents any error messages.

$ set noclobber
$ echo "hello" >|hello.txt
$ cat hello.txt
hello
$ echo "bye" >|hello.txt
$ cat hello.txt
bye
$ unset noclobber
Redirection is frequently used for jobs that run for a long period of time, or for jobs that produce a lot of output. For such jobs, redirection can capture the results in a file. When this is done, it's also necessary to capture any output errors. Remember that if you redirect standard output but not standard error, output will go to a file and error messages will still go to your screen. The following find command will save the results to found.txt, although errors still appear on the screen.

$ find / -name *.txt -exec ls -l {} \; >found.txt
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
Standard output is actually file descriptor 1, so the redirection can also be written with the descriptor number included:

$ find / -name *.txt -exec ls -l {} \; 1>found.txt
$

The following two commands are therefore equivalent:

$ find / -name *.txt -exec ls -l {} \; 1>found.txt
$ find / -name *.txt -exec ls -l {} \; >found.txt
$

Every process starts with three standard files attached to /dev/tty, which is the device name for your terminal. The stdin file is assigned to the keyboard of your terminal, while stdout and stderr are assigned to the screen of your terminal. The output redirection operator defaults to 1; thus > and 1> are equivalent. The input redirection operators < and <0 are equivalent. Redirecting standard error, file descriptor 2, requires that its number be explicitly included in the redirection symbol. In the examples that follow I use 1> to redirect standard output because it helps clarify how the redirection works. When reviewing these examples, remember that > and 1> are the same.

$ find / -name *.txt -exec ls -l {} \; 1>found.txt 2>errors.txt
$
Standard output and standard error can be combined using the >& redirection operator. In the following example, the standard output of find is redirected to the file result.txt. The 2>&1 redirection command instructs the shell to attach the output from standard error (2) to the output of standard output (1). Now both standard output and standard error are sent to result.txt.

$ find / -name *.txt -exec ls -l {} \; 1>result.txt 2>&1
$
The order of the redirections matters. If 2>&1 comes first, standard error is attached to whatever standard output points at when the shell reads the redirection -- the terminal -- and only afterwards is standard output redirected to result.txt. This redirection doesn't drag file descriptor 2 along with it, so standard error is left pointing to the terminal device.

$ find / -name *.txt -exec ls -l {} \; 2>&1 1>result.txt
find: /some/directory: Permission denied
find: /another/one: Permission denied
$
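If the permission errors in the find example are hard to reproduce, the same ordering rule can be seen with any command that writes to standard error. A minimal sketch, using ls on a file that doesn't exist (the exact wording of the error message varies by system):

$ ls nosuchfile 1>out.txt 2>&1
(no error appears on the screen; both streams went into out.txt)
$ ls nosuchfile 2>&1 1>out.txt
ls: nosuchfile: No such file or directory
(standard error was attached to the terminal before standard output was redirected)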
Perhaps one of the most useful forms of redirection is redirecting input from a here document. A shell script can be written that executes a command and serves all input to the command. This is frequently used for a command that is normally run interactively. As an extreme example, I will show you how to do this with the editor vi. I am using vi for two reasons: first, it's interactive, and second, you're probably fairly familiar with it already and so will have a better understanding of what the script's doing. Normally, hands-off editing is done with the sed command.

First, create a small file with several hello strings in it, as in the following example, then name it hello.txt.

sample hello.txt
hello world
hello broadway
hello dolly
Next, create a script named here.sh that contains the lines in the example below. The second line starts the vi editor on the hello.txt file, and the <<END-OF-INPUT option states that vi will run taking its input from this current file, here.sh, reading in a line at a time until a single line containing END-OF-INPUT is read in. The subsequent lines are vi commands to globally search for hello, replace each instance of it with bye, write the file back out, then quit. The next line is the END-OF-INPUT marker, followed by a final echo statement to indicate that the editing is complete.

# here.sh - sample here document
vi hello.txt <<END-OF-INPUT
:g/hello/s//bye/g
:w
:q!
END-OF-INPUT
echo "Editing complete"
Make the script executable:

$ chmod a+x here.sh

When you run the here.sh script, you may receive a warning from vi that it's not running in interactive mode. Next, the actual editing takes place; afterwards, you can cat out the hello.txt file and see your handiwork.

$ ./here.sh
Vim: Warning: Input is not from a terminal
Editing complete
$ cat hello.txt
sample bye.txt
bye world
bye broadway
bye dolly
You can suppress the warning by redirecting vi's standard error to the /dev/null device, as in the following version of here.sh:

# here.sh - sample here document
vi hello.txt 2>/dev/null <<END-OF-INPUT
:g/hello/s//bye/g
:w
:q!
END-OF-INPUT
echo "Editing complete"
Here documents frequently appear as small pieces of larger scripts. In order to make the here portion stand out, it's helpful to indent that section of the shell. Using a minus (-) in front of the end-of-input marker eats the white space -- leading tabs -- at the beginning of a line and prevents it from being passed on to the program. The following is an example:

# here.sh - sample here document
vi hello.txt 2>/dev/null <<-STOP-HERE
:g/hello/s//bye/g
:w
:q!
STOP-HERE
echo "Editing complete"
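One hedge worth adding: in the Bourne and Korn shells the minus form strips leading tab characters only, not spaces, so indent the here lines with tabs. A minimal sketch using cat (the indented lines below begin with a tab character):

cat <<-DONE
	this line is indented with a tab, which <<- removes
	DONE

The terminating marker may be indented with tabs as well and is still recognized.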
The ftp utility is a common candidate for here document status. The following example starts ftp and redirects standard output and standard error to xfr.log. The process logs in to a remote system named nj_system, switches to binary transfer mode, creates two directories, transfers a file named newstuff.a to the remote system, and signs out again. Using a here document makes it possible to execute ftp through a shell script while seeing what the script is doing. The second example below is another method of doing this, but it involves a separate file with the ftp commands.

# xfr.sh - Transfers to a remote system
district=nj
ftplog=xfr.log
insbase=/usr/installations
insdir=$insbase/new
inskit=newstuff.a
echo "Transferring to" $district
ftp 1>>$ftplog 2>&1 $district"_system" <<-ALL-DONE
user mo ddd789
binary
mkdir $insbase
chmod 777 $insbase
mkdir "$insdir"
chmod 777 $insdir
put $inskit $insdir/$inskit
chmod 777 $insdir/$inskit
bye
ALL-DONE
echo "Transfer to" $district "complete."
Before here documents, this kind of job required a separate command file that was redirected into ftp, and couldn't take advantage of script variables. Here's a sample input file for ftp:

user mo ddd789
binary
mkdir /usr/installations
chmod 777 /usr/installations
mkdir /usr/installations/new
chmod 777 /usr/installations/new
put newstuff.a /usr/installations/new/newstuff.a
chmod 777 /usr/installations/new/newstuff.a
bye

And the script that uses it:

# xfr.sh - Transfers to a remote system
district=nj
ftplog=xfr.log
echo "Transferring to" $district
ftp 1>>$ftplog 2>&1 $district"_system" <ftp_commands
echo "Transfer to" $district "complete."
Posted by Appleman at 5:06 AM 0 comments
Labels: Education, Engineering, Tips N tricks, UNIX
Understanding Unix shells and environment variables, Part 1
A shell variable is created simply by assigning it a value. In the doecho script example, $PLACE is set in the first line and picked up in the second line by the built-in echo command. Use an editor to create the script and save it as doecho. Change the mode using chmod a+x doecho:

# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
Use ./command to execute a shell script in the current directory. You don't need to do this if your $PATH variable contains the . as one of the searched directories. The ./command method works for scripts in your current directory, even if the current directory isn't included on your path.

$ ./doecho
doecho says Hello Hollywood
$
At this point, $PLACE is a shell variable. Now create a second script named echoplace and change its mode to executable.

# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
Then edit doecho to execute echoplace as its last step.

# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
./echoplace
Run the doecho script. The output is a bit surprising.

$ ./doecho
doecho says Hello Hollywood
echoplace says Hello
$
echoplace is run as the last command of doecho. It tries to echo the $PLACE variable but comes up blank. Say goodbye to Hollywood.

The reason lies in how the shell executes commands. Whenever the shell reads a command, it checks whether the command is a built-in command (like echo), an executable program (like vi or grep), a user-defined function, or an executable shell script. If it's any of the first three, it directly executes the command, function, or program; but if the command is an executable shell script, the shell spawns another running copy of itself -- a child shell. The spawned child shell uses the shell script as an input file and reads it in line by line as commands to execute.

When you type ./doecho to execute the doecho script, you're actually executing a command that is something like one of the following, depending on which shell you're using. (See the Resources section at the end of this column for more information on redirection.)

$ sh < ./doecho
(or)
$ ksh <./doecho
The child shell opens doecho and begins reading commands from that file. It performs the same test on each command, looking for built-in commands, functions, programs, or shell scripts. Each time a shell script is encountered, another copy of the shell is spawned.

Run doecho again so that you can follow it through the steps described below. The output of doecho is repeated here, with extra spacing and notes.

$ ./doecho                    <- the command typed in shell one launches shell two reading doecho
doecho says Hello Hollywood   <- shell two sets $PLACE and echoes; shell three starts echoplace
echoplace says Hello          <- shell three cannot find $PLACE and echoes a blank
$                             <- shells three and two exit; back at shell one
Shell one is your login shell, the one in which you type ./doecho. Shell two is started as a child of shell one. Its job is to read and execute doecho. The doecho script is repeated below. The first command in doecho creates the shell variable $PLACE and assigns the value "Hollywood" to it. At this point, the $PLACE variable only exists with this assignment inside shell two. The echo command on the next line will print out doecho says Hello Hollywood and move on to the last line. Shell two reads in the line containing ./echoplace and recognizes this as a shell script. Shell two launches shell three as a child process, and shell three begins reading the commands in echoplace.

# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
./echoplace
The only working command in echoplace is a repeat of the echoed message. However, $PLACE only exists with the value "Hollywood" in shell two. Shell three sees the line to echo echoplace says Hello and the $PLACE variable, and cannot find any value for $PLACE. Shell three creates its own local variable named $PLACE as an empty variable. When it echoes the variable, it's empty and prints nothing.

# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
The shell variable $PLACE in shell two is only available inside shell two. If you type in a final command in shell one to echo $PLACE at the shell one level, you'll find that $PLACE is also blank in shell one.

$ echo "shell one says Hello " $PLACE
shell one says Hello
$
To make a variable visible to child shells, publish it to the environment with export in the Bourne and Korn shells.

$ PLACE=Hollywood; export PLACE
$

The assignment and the export can also be combined in one statement:

$ export PLACE=Hollywood
$
In the C shell, you create a shell variable using set, then assign an environment variable using setenv. Note that setenv doesn't use the = operator.

> set PLACE=Hollywood
> setenv PLACE Hollywood
If you go back to the doecho script and edit it to export the $PLACE variable, it becomes available in shell two (the publishing shell) and shell three (the child shell).

# doecho sample variable
PLACE=Hollywood; export PLACE
echo "doecho says Hello " $PLACE
./echoplace
Now when doecho is run, the output is changed. This happens because in shell three $PLACE is found as an environment variable that has been exported from shell two.

$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
$
Setting a different value for $PLACE before you run doecho will help you verify its scope. After doecho is complete, echo the value of $PLACE at the shell one level. Notice that doecho in shell two and echoplace in shell three both see $PLACE's value as "Hollywood", but the top-level shell sees the value "Burbank". This is because $PLACE was exported in shell two. The environment variable $PLACE has scope in shell two and shell three, but not in shell one. Shell one creates its own local shell variable named $PLACE that is unaffected by shells two and three.

$ PLACE=Burbank
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
$ echo "shell one says Hello " $PLACE
shell one says Hello Burbank
$
Next, modify doecho by adding a repeat of the echo line after the return from echoplace.

# doecho sample variable
PLACE=Hollywood
echo "doecho says Hello " $PLACE
./echoplace
echo "doecho says Hello " $PLACE
Also modify echoplace to change the value of $PLACE. Once this is done, echo it again.

# echoplace echo $PLACE variable
echo "echoplace says Hello " $PLACE
PLACE=Pasadena
echo "echoplace says Hello " $PLACE
Run doecho once more. The modified echoplace changes the value of $PLACE, a change that appears in shell three -- and in shell two, even after it returns from echoplace. Once a variable is published to the environment, it's fair game to any shell at or below the publishing level.

$ PLACE=Burbank
$ ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
echoplace says Hello Pasadena
doecho says Hello Pasadena
$ echo "shell one says Hello " $PLACE
shell one says Hello Burbank
$
Finally, execute doecho by starting it with a dot and a space, then echo the value of $PLACE when doecho is complete. In this example, shell one recognizes $PLACE as having been given the value "Pasadena".

$ . ./doecho
doecho says Hello Hollywood
echoplace says Hello Hollywood
echoplace says Hello Pasadena
doecho says Hello Pasadena
$ echo "shell one says Hello " $PLACE
shell one says Hello Pasadena
$
When you execute . ./doecho, shell one doesn't spawn a child shell, but instead switches gears and begins reading from doecho. The doecho script initializes and exports the $PLACE variable. The export of $PLACE now affects all shells because you exported it at the shell one level.

This dot notation is the same mechanism the shell uses to read startup files such as .profile. Suppose for instance that you have a specialized task that you do only on certain days, and that you need to set up some special environment variables for it. Place these variables in a file named specvars.

# specvars contains special variables for the
# special task that I do sometimes
WORKDIR=/some/dir
SPECIALVAR="Bandy Legs"
REPETITIONS=52
export WORKDIR SPECIALVAR REPETITIONS
If you simply execute the specvars file, you won't get the expected effect, because a subshell, shell two, is created to execute specvars, and the export command exports to shell two and below. Shell one doesn't view these exports as environment variables.

$ specvars
$ echo "WORKDIR IS " $WORKDIR
WORKDIR IS
$

Executing specvars with the dot notation makes the variables available at the shell one level.

$ . specvars
$ echo "WORKDIR IS " $WORKDIR
WORKDIR IS /some/dir
$
You can check which variables have been published to the environment with the printenv command; a list of all variables that are available to the current shell, including all child shells, is printed out.

Posted by Appleman at 5:05 AM 0 comments
Labels: Education, Engineering, Tutorials, UNIX
Understanding Unix shells and environment variables, Part 2
Examine and customize your Unix environment
Unix shells come with variables that are used by the shell or related commands. In addition to variables that you create, the shell itself requires or takes advantage of variables that can be set up for it. When you first log in to a Unix system, the /etc/passwd file contains the name of the shell that is to be run for you. This appears in the last field of the password file. To see yours, type cat /etc/passwd and pipe the result through grep looking for your userid. In the example below I have used my id, mjb.

$ cat /etc/passwd|grep mjb
mjb:500:500::/home/mjb:/bin/ksh
Because my login shell is the Korn shell, logging in first executes /etc/profile, which a system administrator has programmed for basic setup actions required for all users. After I execute /etc/profile, I execute $HOME/.profile. This is set up to contain my own environment. Both /etc/profile and $HOME/.profile set environment variables. The Bourne shell works in a similar fashion. The C shell also takes a similar approach, but uses more files. It runs /etc/csh.cshrc, then /etc/csh.login, then an entire raft of files in your home directory, such as ~/.cshrc, ~/.history, ~/.login, and, finally, ~/.cshdirs. Regardless of the approach, the result is an environment in which the user will run, including environment variables.

You can see your environment variables by using printenv or env. The following is a short example of the output.

$ printenv
USERNAME=
HISTSIZE=1000
HOSTNAME=my.system.com
LOGNAME=mjb
MAIL=/var/spool/mail/mjb
TERM=xterm
PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin/X11:/home/mjb/bin
HOME=/home/mjb
SHELL=/bin/ksh
PS1=[\u@\h \W]\$

Shells also use variables that are not part of the environment. For a description of the difference between shell and environment variables, see last month's column.
For example, PS1, listed above as an environment variable, is the prompt displayed on the screen when the shell is waiting for a new command. Another shell variable, PS2, contains the prompt to be used when a command is begun but not completed before Enter is pressed. To see the prompt in use, type the commands below. The first echoes the $PS2 prompt to the screen. Then a new command is started with an opening parenthesis. The user presses Enter immediately and the shell waits for a command and a closing parenthesis. The shell displays the > prompt to indicate that it is waiting for more input. The command is entered and Enter is pressed. Once again, the > prompt is displayed, because the user has not yet closed the open parenthesis. Finally, the user types ) and presses Enter, ending the command.

$ echo $PS2
>
$ (
> cat /etc/passwd|grep mjb
> )
$

You can create a more graphic version of this by adding a command to change the $PS2 prompt. In the following example, the value of the $PS2 prompt is changed, the same command sequence is entered, and then the $PS2 prompt is reset.

$ echo $PS2
>
$ PS2="more please> "
$ (
more please> cat /etc/passwd|grep mjb
more please> )
$ PS2="> "
$ echo $PS2
>
$

Why does the PS2 prompt have a value if it is not in the environment? Look at the printenv listing and you will not see an entry for PS2.

$ printenv
USERNAME=
HISTSIZE=1000
HOSTNAME=my.system.com
LOGNAME=mjb
MAIL=/var/spool/mail/mjb
TERM=xterm
PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin/X11:/home/mjb/bin
HOME=/home/mjb
SHELL=/bin/ksh
PS1=[\u@\h \W]\$

The shell sets up some default shell variables; PS2 is one of them. Other useful shell variables that are set or used in the Korn shell are:
- _ (underscore) -- When an external command is executed by the shell, this is set in the environment of the new process to the path of the executed command. In interactive use, this parameter is also set in the parent shell to the last word of the previous command.
- COLUMNS -- The number of columns on the terminal or window.
- ENV -- If this parameter is found to be set after any profile files are executed, the expanded value is used as a shell startup file. It typically contains function and alias definitions.
- ERRNO -- Integer value of the shell's errno variable; this indicates the reason the last system call failed.
- HISTFILE -- The name of the file used to store history. When assigned, history is loaded from the specified file. Multiple invocations of a shell running on the same machine will share history if their HISTFILE parameters all point to the same file. If HISTFILE isn't set, the default history file is $HOME/.sh_history.
- HISTSIZE -- The number of commands normally stored in the history file. Default value is 128.
- IFS -- Internal field separator, used during substitution and by the read command to split values into distinct arguments; normally set to space, tab, and newline.
- LINENO -- The line number of the function or shell script that is being executed. This variable is useful for debugging shell scripts. Just add an echo $LINENO at various points and you should be able to determine your location within a script.
- LINES -- Set to the number of lines on the terminal or window.
- PPID -- The process ID of the shell's parent. A read-only variable.
- PATH -- A colon-separated list of directories that are searched when seeking commands.
- PS1 -- The primary prompt for interactive shells.
- PS2 -- Secondary prompt string; default value is >. Used when more input is needed to complete a command.
- PWD -- The current working directory. This may be unset or null if the shell does not know where it is.
- RANDOM -- A simple random number generator. Every time RANDOM is referenced, it is assigned the next number in a random number series. The point in the series can be set by assigning a number to RANDOM.
- REPLY -- Default parameter for the read command if no names are given.
- SECONDS -- The number of seconds since the shell started or, if the parameter has been assigned an integer value, the number of seconds since the assignment plus the value that was assigned.
- TMOUT -- If set to a positive integer in an interactive shell, it specifies the maximum number of seconds the shell will wait for input after printing the primary prompt (PS1). If this time is exceeded, the shell exits.
- TMPDIR -- The directory where shell temporary files are created. If this parameter is not set, or does not contain the absolute path of a directory, temporary files are created in /tmp.
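A few of these are easy to check interactively. The following is a minimal sketch for a Korn-compatible shell; the numbers printed will of course differ on your system:

$ echo $PPID              # process ID of this shell's parent
$ echo $RANDOM $RANDOM    # two different pseudo-random numbers
$ echo $SECONDS           # seconds since this shell started
$ PS1="ksh> "             # change the primary prompt for the current session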
The C shell keeps its own lowercase equivalents of many of these, such as prompt1, prompt2, path, home, and so on.

Other interesting variables are the locale-setting variables: LC_ALL, LC_CTYPE, LC_COLLATE, and LC_MESSAGES. LC_ALL effectively overrides the values for the other three LC variables; you can set them independently by not setting LC_ALL.

- LC_ALL -- Determines the locale to be used to override any previously set values.
- LC_COLLATE -- Defines the collating sequence to use when sorting.
- LC_CTYPE -- Determines the locale for the interpretation of a sequence of bytes.
- LC_MESSAGES -- Determines the language in which messages should be written.
LC_ALL can be used to change the language for the system. Try the following sequence of commands to see these in action. The language is changed to French (fr) and grep is invoked incorrectly (the option -x with no pattern), so the error message appears in French. Then LC_ALL is set to Spanish (es) and the error and error message are repeated. Finally LC_ALL is unset and the error returns to English.

$ export LC_ALL=fr
$ grep -x
Usage: grep [OPTION]...PATRON [FICHIER]
Pour en savoir davantage, faites: 'grep --help'
$ LC_ALL=es
$ grep -x
Modo de empleo: grep [OPCION]...PATRON [FICHERO]
Pruebe 'grep --help' para mas informacion
$ unset LC_ALL
$ grep -x
Usage: grep [OPTION]...PATTERN [FILE]
Try 'grep --help' for more information.
$
End of article.
Posted by Appleman at 5:04 AM 0 comments
Labels: Education, Engineering, Tutorials, UNIX
The language of shells
Making sense of shell commands
Summary
Working with shells can be difficult, as they require unusual and specific combinations of words and punctuation. This month, Mo Budlong helps you out by explaining some basic commands, such as ls, echo, and man. Also, Mo corrects a problem from May's Unix 101 in a sidebar. (1,300 words)
A shell's basic job is to:

- Accept a command
- Interpret the command
- Execute the command
- Wait for another command
A shell is not smart enough to handle a request like this one:

OK, Hal. Sort out my correspondence, throw out anything that is too old, and archive the rest.
Instead, a shell understands short, precise commands. The echo command is almost always built in to a shell.

$ echo "Hello, Hal"
Hello, Hal
$
Shell commands generally follow the form:

command name [-options] [arguments]

For example:

ls -l /home/mjb

The ls command is usually a separate program rather than a built-in command. The command above will get you a long listing of the contents of the /home/mjb directory. In this example, ls is the command name, -l is an option that tells ls to create a long, detailed output, and /home/mjb is an argument naming the directory that ls is to list.
is to list. sh
(the Bourne shell), ksh
(the Korn shell), csh
(the C shell), bash
, (the Bourne Again shell), pdksh
(the Public Domain Korn shell), and tcsh
(the Tiny C shell). echo $SHELL
The shell attaches special meanings to a number of metacharacters, including / < > ! $ % ^ & * | { } ~ and ;. When naming files and directories on Unix, it is safest to only use numerals, upper and lower case letters, and the period, dash, and underscore characters. Some of the most common metacharacters are listed below, with a short illustration after the list.

- ; -- Separates multiple commands on a command line
- & -- Causes the preceding command to execute asynchronously (as its own separate process so that the next one does not wait for it to complete)
- () -- Enclose commands that are to be launched in a separate shell
- | -- Pipes the output of the command to the left of the pipe to the input of the command on the right of the pipe
- > -- Redirects output to a file or device
- >> -- Redirects output to a file or device and appends to it instead of overwriting it
- < -- Redirects input from a file or device
- newline -- Ends a command or set of commands
- space -- Separates command words
- tab -- Separates command words
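Here is a short, hedged illustration of a few of these metacharacters in action; the file names are placeholders:

$ date; who                           # two commands on one line, run one after the other
$ sort big.txt > sorted.txt &         # run sort in the background, redirecting its output
$ grep "hello" notes.txt | wc -l      # pipe grep's output into wc to count matching lines
$ echo "one more line" >> notes.txt   # append to notes.txt instead of overwriting it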
Some metacharacters are two characters long, such as ||, &&, and >>. With these metacharacters you can define a command-line word, which is a sequence of characters separated by one or more nonquoted metacharacters.

To get help on any command, use the man command, followed by the name of the command you need help with. For instance, to see the manual for the ls command, enter:

man ls
End of article.
A note to my readers

I would like to note a correction to the May edition of Unix 101, in which I said: "Once a shell variable has been exported and becomes an environment variable, it can be modified by a subshell. The modification affects the environment variable at all levels where the environment variable has scope."

Several sharp-eyed readers picked up on this and sent comments ranging from "Oh, no, you can't" to "Gee whiz, which shell are you using? It doesn't work for me." They are right. A subshell cannot modify an environment variable and return it to the parent. It can modify an environment variable and pass it on to a child process, but it cannot return the new value to a higher level.

To illustrate this correctly, create the following three script files and grant them execute privileges using chmod a+x script*.

# script1
myvar="Hello" ; export myvar
echo "script1:myvar=" $myvar
./script2
echo "Back from script1 and script2"
echo "script1:myvar=" $myvar

# script2
myvar="Goodbye"
echo "script2:myvar=" $myvar
./script3

# script3
echo "script3:myvar=" $myvar

If you run this sequence, the results show that $myvar exists in all three scripts (and, consequently, in all three processes), but modifying it in script2 only affects its value in script3.

$ ./script1
script1:myvar= Hello
script2:myvar= Goodbye
script3:myvar= Goodbye
Back from script1 and script2
script1:myvar= Hello
$

My apologies to those of you who tried to make the example in the May issue work.
Posted by Appleman at 5:03 AM 0 comments
Labels: Education, Engineering, Tips N tricks, UNIX
Using cron basics
Utility helps you get your timing right
Summary
Cron allows you to program jobs to be performed at specific times or at steady intervals. This month Mo Budlong explains some cron fundamentals and runs an experiment. (1,500 words)
cron runs as a daemon process, usually named crond. To confirm that it is running, use ps and grep to locate the process.

ps -ef|grep cron
root 387 1 0 Jun 29 ? 00:00:00 crond
root 32304 20607 0 00:18 pts/0 00:00:00 grep cron
The second line of the output is the grep cron command used to locate crond.

When cron runs, it processes job schedules from several places:

- Any files in /var/spool/cron or /var/spool/cron/crontabs. Those are individual files created by any user using the cron facility. Each file is given the name of the user. You will almost always find a root file in /var/spool/cron/root. If the user account named jinx is using cron, you will also find a jinx file as /var/spool/cron/jinx.

ls -l /var/spool/cron
-rw------- 1 root root 3768 Jul 14 23:54 root
-rw------- 1 root group 207 Jul 15 22:18 jinx

- A cron file that may be named /etc/crontab. That is the traditional name of the original cron table file.
- Any files in the /etc/cron.d directory.
You create and maintain the /var/spool/cron file for your account with the crontab command; you never edit the files directly in /var/spool/cron. The crontab program knows where the files that need to be edited are, which makes things much easier on you.

The crontab command takes three main options: -l, -r, and -e. The -l option lists the contents of the current table file for your current userid, the -e option lets you edit the table file, and the -r option removes a table file.
option removes a table file. - Minute of the hour in which to run (0-59)
- Hour of the day in which to run (0-23)
- Day of the month (1-31)
- Month of the year in which to run (1-12)
- Day of the week in which to run (0-6) (0=Sunday)
- The command to execute
Each of the first five fields can contain:

- A number in the specified range
- A range of numbers in the specified range; for example, 2-10
- A comma-separated list consisting of individual numbers or ranges of numbers, as in 1,2,3-7,8
- An asterisk, which stands for all valid values
You can display your current cron table with the crontab -l command. The following example includes line numbers to clarify the explanation.

1 $ crontab -l
2 # DO NOT EDIT THIS FILE
3 # installed Sat Jul 15
4 #min hr day mon weekday command
6 30 * * * * some_command
7 15,45 1-3 * * * another_command
8 25 1 * * 0 sunday_job
9 45 3 1 * * monthly_report
10 * 15 * * * too_often
11 0 15 * * 1-5 better_job
$

Line 6 runs some_command at 30 minutes past the hour. Note that the fields for hour, day, month, and weekday were all left with the asterisk; therefore some_command runs at 30 minutes past the hour, every hour of every day. Line 7 runs another_command at 15 and 45 minutes past the hour for hours 1 through 3, namely, 1:15, 1:45, 2:15, 2:45, 3:15, and 3:45 a.m. Line 8 says sunday_job is to be run at 1:25 a.m., only on Sundays. Line 9 runs monthly_report at 3:45 a.m. on the first day of each month.

To try cron out, use crontab -e to edit your cron table and add an entry like the following one:

$ crontab -e
0-59 * * * * echo `date` "Hello" >>$HOME/junk.txt
$

This entry runs every minute. It echoes the date and the word "Hello", and also contains the command to append the result to a file in my home directory, which is named junk.txt. Save the table, then use crontab -l to view the file.

$ crontab -l
# DO NOT EDIT THIS FILE
# installed Sat Jul 15
0-59 * * * * echo `date` "Hello" >>$HOME/junk.txt
$
Change to your home directory and use the touch command to create junk.txt in case it does not exist, and then use tail -f to open the file and display the contents line by line as they are inserted by cron.

$ cd
$ touch junk.txt
$ tail -f junk.txt
Sat Jul 15 15:23:07 PDT Hello
Sat Jul 15 15:24:07 PDT Hello
Sat Jul 15 15:25:07 PDT Hello
Sat Jul 15 15:26:07 PDT Hello
Once a minute, cron appends another line to junk.txt. When you have seen enough, press Control-C to stop tail, then use the crontab -e option to open the cron table file and remove the line you just created.

cron traps any output that a job sends to standard out or to standard error that has not been redirected to a file, as in the example just tested. The trapped output is dropped into a mail file and is sent either to the user who originated the command or to root. Either way, it conveniently traps errors without forcing cron to blow up or abort.

Traveling down the Unix $PATH
Why are some commands executed, and some ./executed?
Summary
Why do some commands need a dot-slash in order to run? In this month's Unix 101 column, Mo Budlong explores the answer to this question, and explains the difference between built-in and executable commands. (1,200 words)
Why are some commands executed, and some ./executed? In other words, why do some commands need a dot-slash in front of them to run? Rather than giving you a short answer, I am going to explore a couple of things and hope you find them enlightening.

$ newscript
$ ./newscript
$
Unix commands break down into two main categories: builtins and executables. Builtins are commands that are part of the shell itself, such as echo, read, and export. Executables are separate files: either scripts written for an interpreter such as sh, ksh, csh, or perl, or compiled executables, such as a program written in C and compiled down to a binary. Aliases defined in a shell such as ksh also break down into these two main categories, because the command is translated and then issued as either a builtin or an executable. The following examples create aliases for the builtin echo and the executable grep.

$ alias sayit='echo '
$ alias g='grep '
You can use the type command to verify the nature of echo.

$ type echo
echo is a shell builtin
$
Use the same type command to check on grep. type will give you the directory that contains the executable grep program.

$ type grep
grep is /bin/grep
$
Whenever you use grep as a command or ask for its location using type, the operating system finds grep by using the $PATH environment variable. If type can find grep, then echoing out the $PATH variable will verify that the path to the directory containing grep is part of the $PATH variable.

$ type grep
grep is /bin/grep
$ echo $PATH
/bin:/usr/bin:/usr/local/bin:/home/mjb/bin
$
The directories in $PATH are separated by colons. The above example includes /bin, /usr/bin, /usr/local/bin, and /home/mjb/bin. As an aside, the type command is probably a builtin.

$ type type
type is a shell builtin
$
A close relative of type is whereis, which will usually locate a command and its manual entry.

$ whereis grep
grep: /bin/grep /usr/man/man1/grep.1
$
Remember that a shell's basic job is to:

- Accept a command
- Interpret the command
- Execute the command
- Wait for another command
When interpreting a command, the shell looks for builtins and functions first, and then searches for an executable in each of the directories listed in $PATH. If it can't be found in one of these path directories, an error results.

$ zowie
zowie: command not found
$
Unix searches only the directories named in the $PATH variable. This is important to understand, especially if you came to Unix from an MS-DOS background. MS-DOS uses a PATH variable as well, but it searches the user's current directory before it searches in any directories in the user's PATH. On Unix, the current directory is searched only if it is listed in the $PATH variable. This will appear as a single dot, the Unix shorthand for current directory. Note the dot at the end of the $PATH variable below.

$ echo $PATH
/bin:/usr/bin:/usr/local/bin:/home/mjb/bin:.
To see the effect of the $PATH variable, create a new directory under your home directory, such as $HOME/temp, and change to it.

$ cd $HOME
$ mkdir temp
$ cd temp
$
Use the vi editor to create a simple script named sayhello, and make it executable.

# sayhello
echo "Hello"

$ chmod a+x sayhello
$
If the current directory is included in your $PATH variable, you'll be able to execute the command directly.

$ sayhello
Hello
$
If the current directory is not included in your $PATH variable, the computer will search through your $PATH (anywhere but the current directory) and report failure.

$ sayhello
sayhello: command not found
$
When you type a bare command name such as sayhello, the computer searches for it. However, if you apply any additional path information to the command, the shell assumes that you are giving an explicit path and only looks where you tell it to. Consequently, ./sayhello locates the command in the current directory.

$ ./sayhello
Hello
$
This works whether or not the current directory is part of your $PATH variable, because the dot-slash precludes the shell's search for the command.

To add the current directory to your $PATH, or to create a $PATH if you don't have one, you need to edit your personal startup profile, usually called .profile and located in your $HOME directory. Look for a line that exports the PATH variable, such as line 5 below. (Line numbers are included here for easy reference, but are not part of the file.) This file already has a line 4 that includes some local additions to the default $PATH.

1. # .profile
2. # User specified environment
3. USERNAME="mjb"
4. PATH=$PATH:$HOME/bin
5. export USERNAME PATH
You can either add a new line that appends the dot to the path:

PATH=$PATH:.

or fold it into the existing line 4:

PATH=$PATH:$HOME/bin:.

The next time the profile is read, the current directory will be included when the shell searches your $PATH for commands.
for commands. $PATH
, including the current directory, you must specify a path to locate the command. This includes a ./
for the current directory. Posted by Appleman at 4:58 AM 0 comments
Labels: Education, Engineering, Tutorials, UNIX