Friday, March 9, 2012

PostgreSQL

POSTGRESQL

aptitude search postgresql

You have to install the PostgreSQL server package, preferably the latest version.

Install PostgreSQL:

apt-get install <package name> ( package name with version )
                       or
aptitude install <package name>

After that, check whether PostgreSQL is working. Running psql directly as root fails with:
psql: FATAL: role "root" does not exist

# psql postgres

# su - postgres

# pwd    ( check the current path; the postgres user's home directory is )
    /var/lib/postgresql

postgres@common-...:~$ createuser -U postgres -P    ( create a new role; -P prompts for its password )

Connect to or access any database

# psql postgres ( database name )
postgres=>  [ postgres is the default database created when we install PostgreSQL ]

To list databases: "\l"
To quit psql: "\q"
Create a database in PostgreSQL
# createdb data1 ( database name ) ---> run this from the shell as the postgres user.

postgres=> CREATE DATABASE data1;
postgres=> \l
Terminate any SQL statement with ";".

Connect to another database
postgres=> \c data1    ( \c [ database name ] )
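
Putting the steps together, a minimal sketch of creating a dedicated role and database ( the names appuser and appdb are only examples, and the exact package name depends on your distribution ):

# apt-get install postgresql          # install the server
# su - postgres                       # switch to the postgres account
$ createuser -P appuser               # new role; prompts for a password
$ createdb -O appuser appdb           # new database owned by appuser
$ psql appdb                          # connect and verify
appdb=# \l                            -- list databases
appdb=# \q                            -- quit psql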

Alt + F1 : open application menu
Alt + F2 : Display run application dialog
Print Screen : Take a screenshot of the entire desktop
Alt + Print Screen : Take a screenshot of the currently open window 
Ctrl + Alt + Arrow : Switch to the workspace in the specified direction from the current workspace.
Ctrl + Alt + D :  Minimize all windows & give focus to the desktop
Alt + Tab : Switch between windows. A list of windows that you can select is displayed. Release the keys to select a window. You can hold the "Shift" key to cycle through the windows in reverse order.
Ctrl + Alt + Tab : Switch the focus between the panels and the desktop. A list of items that you can select is displayed. Release the keys to select an item. You can hold the "Shift" key to cycle through the items in reverse order.

Wednesday, March 7, 2012

PS command

ps command 

Used to check which processes are running.

# ps -ef ---> shows all processes running on the computer
# ps -ef | grep -i <process name> ----> grep for any process you want to check or kill.
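
For example, a small sketch of finding and killing a process ( firefox is only an example process name; substitute the real PID from the second column ):

# ps -ef | grep -i firefox      # note the PID in the second column
# kill <PID>                    # ask that process to terminate
# kill -9 <PID>                 # force-kill it if it ignores the request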

If you want to set a value in a /proc file, then run:

echo [ 1 or 0 ] > /proc/sys/net/ipv4/ip_forward

echo is the command
1 or 0 is the value
/proc/sys/net/ipv4/ip_forward --> is the file path
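
A quick sketch of reading the current value and then enabling IP forwarding:

# cat /proc/sys/net/ipv4/ip_forward     # 0 = forwarding disabled, 1 = enabled
0
# echo 1 > /proc/sys/net/ipv4/ip_forward
# cat /proc/sys/net/ipv4/ip_forward
1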

APT ( Advanced Packaging Tool )

APT ( Advanced Packaging Tool )

apt-get   -------> APT package handling tool
apt-get install (package name) ----------> To install any software and package


apt-get install firefox


apt-get autoremove ( package name ) ------> To remove a package along with its no-longer-needed dependencies
apt-get remove ( package name ) --------------> To remove a package

* aptitude :-> a high-level interface to the package manager. This command is used
                      to install software or to search for it.

aptitude search ( package name ) ----> To search for a package
aptitude install ( package name )  -----> To install a package
aptitude remove ( package name ) ----> To remove a package


If you want to install a binary package like Java ( distributed in binary format ), give the package execute permission and then run it:
chmod +x java
./java

Check whether Java is installed:

dpkg-query -l | grep java

This command shows whether the Java package is installed.

env : run a program in a modified environment
         Run this command to check whether the environment variables and paths are set correctly.

export JAVA_HOME=/opt/java
export is the command
JAVA_HOME is the variable
/opt/java is the file location

export JAVA_HOME=/opt/java
export ANT_HOME=/opt/ant  ----> [ Ant is a Java build tool ]
echo $JAVA_HOME
echo $ANT_HOME -----> To check whether the variables are exported.

vim /etc/profile ----> insert the following lines

export JAVA_HOME=/opt/java
export ANT_HOME=/opt/ant
export PATH=$PATH:${JAVA_HOME}/bin:${ANT_HOME}/bin ---> then restart the PC

But if you don't want to reboot the PC, then run
   [ source /etc/profile ]  -----> this re-reads the file and applies the exported variables in the current shell.
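
A quick way to verify the new environment in the current shell ( assuming the JDK and Ant really are installed under /opt/java and /opt/ant as above ):

source /etc/profile
echo $JAVA_HOME $ANT_HOME      # should print /opt/java /opt/ant
which java ant                 # should resolve to the /opt/.../bin directories
java -version                  # confirms the JVM actually runs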





 

Tuesday, March 6, 2012

Proc File System

Understanding the Proc File System

  • The Linux kernel provides a mechanism to access its underlying internal data structures and also to change its kernel settings at run-time through the /proc file system. We will be discussing the /proc file system here targeted to the Intel x86 architecture, though the basic concepts remain the same for Linux on any platform.

/proc - a Virtual File System:

The /proc file system is a mechanism for the kernel and kernel modules to send information to processes ( hence the name /proc ). This pseudo file system allows you to interact with the internal data structures of the kernel, get useful information about processes, and change settings ( by modifying kernel parameters ) on the fly. /proc is stored in memory, unlike other file systems, which are stored on disk. If you look at the file /proc/mounts ( which lists all the mounted file systems, like the "mount" command ), you should see a line in it like:

grep proc /proc/mounts
/proc /proc proc rw 0 0

/proc  is controlled by the kernel and does not have an underlying device. Because it contains mainly state information controlled by the kernel, the most logical place to store the information is in memory controlled by the kernel. Doing a 'ls -l'
on /proc reveals that most of the files are 0 bytes in size; yet when a file is viewed, quite a bit of information is seen. How is this possible? This happens because the /proc file system, like any other regular file system, registers itself with the Virtual File System layer ( VFS ). However, when the VFS makes calls to it requesting i-nodes for files/directories, the /proc file system creates those files/directories from information within the kernel.

Mounting the proc file system:

If it is not already mounted on your system, the proc file system can be mounted by running the following command -

mount -t proc proc /proc

The above command should successfully mount your proc file system. Please read the mount man page for more details.

Viewing the /proc files:

/proc files can be used to access information about the state of the kernel, the attributes of the machine, the state of running processes, etc. Most of the files in the /proc directory provide the latest glimpse of a system's physical environment. Although these /proc files are virtual, they can be viewed like ordinary files; each one is created on the fly from information within the kernel. Here are some interesting results which I got on my system.

$ ls -l /proc/cpuinfo   
-r--r--r--  1 root root 0 2012-03-06 11:47 /proc/cpuinfo

$ file /proc/cpuinfo
/proc/cpuinfo: empty

$ cat /proc/cpuinfo

processor    : 0
vendor_id    : GenuineIntel
cpu family    : 15
model        : 4
model name    :                   Intel(R) Xeon(TM) CPU 3.40GHz
stepping    : 1
cpu MHz        : 3391.625
cache size    : 1024 KB
physical id    : 0
siblings    : 2
core id        : 0
cpu cores    : 1
fpu        : yes
fpu_exception    : yes
cpuid level    : 5
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl est cid cx16 xtpr
bogomips    : 6792.84
clflush size    : 64
cache_alignment    : 128
address sizes    : 36 bits physical, 48 bits virtual
power management:

processor    : 1
vendor_id    : GenuineIntel
cpu family    : 15
model        : 4
model name    :                   Intel(R) Xeon(TM) CPU 3.40GHz
stepping    : 1
cpu MHz        : 3391.625


ETC..

This is the result from a machine with two logical CPUs. Most of the information above is self-explanatory and gives useful hardware information about the system. Some of the information in /proc files is encoded, and various utilities interpret this encoded information and output it in a human readable format. Some of these utilities are: 'top', 'ps', 'apm' etc.

Getting Useful system/kernel information:

The Proc File System can be used to gather useful information about the system and the running kernel. Some of the important files are listed below:

  • /proc/cpuinfo - information about the CPU ( model, family, cache size etc.)
  • /proc/meminfo - information about the physical RAM, swap space etc.
  • /proc/mounts - list of mounted file systems
  • /proc/devices - list of available devices
  • /proc/filesystems - supported file systems
  • /proc/modules -  list of loaded modules
  • /proc/version - kernel version
  • /proc/cmdline - parameters passed to the kernel at boot time 

There are many more files in /proc than listed above. An alert reader is expected to do a 'more' on every file in the /proc directory or read [1] for more information about the files present in /proc. I suggest using 'more' and not 'cat' until you know the file system a bit, because some files ( e.g. kcore ) can be very large.
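
A few of these can be inspected directly; a short sketch ( the exact output naturally depends on your machine ):

$ cat /proc/version            # kernel version string
$ head -5 /proc/meminfo        # first few lines of memory statistics
$ cat /proc/cmdline            # kernel boot parameters
$ cat /proc/filesystems        # file systems the kernel supports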

Information about running processes:

The /proc file system can be used to retrieve information about any running process. There are a number of numbered sub-directories inside /proc; each number corresponds to a process ID (PID). Thus for each running process there is a sub-directory inside /proc named by its PID. Inside these sub-directories are files that provide important details about the state and environment of the process. Let's try to search for a running process.


$ps -aux | grep mozilla

The above command shows that there is a running mozilla process with PID ***.

The file "cmdline" contains the command invoked to start the process. The "environ" file contains the environment variables for the process. "status" has status information on the process, including the user (UID) and group (GID) identification of the user executing the process, the parent process ID (PPID) that instantiated the PID, and the current state of the process, such as "Sleeping" or "Running". Each process directory also has a couple of symbolic links: "cwd" is a link to the current working directory of the process, "exe" links to the executable program of the running process, and "root" is a link to the directory which the process sees as its root directory (usually "/"). The directory "fd" contains links to the file descriptors that the process is using. The "cpu" entry appears only on SMP Linux kernels; it contains a breakdown of process time by CPU.

/proc/self is an interesting sub-directory that makes it easy for a program to use /proc to find information about its own process. The entry /proc/self is a symbolic link to the /proc directory corresponding to the process accessing the /proc directory.
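
A tiny sketch showing /proc/self in action ( the PID printed will of course differ from run to run ):

$ readlink /proc/self              # prints the PID of the readlink process itself
$ head -3 /proc/self/status        # status of the head process reading the file
$ ls -l /proc/self/cwd             # symlink to the current working directory of ls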

Interacting with kernel via /proc:

Most of the files in /proc discussed above are read-only. However, the /proc file system also provides a way to interact with the kernel via read-write files. Writing to these files can change the state of the kernel, and therefore changes to these files should be made with caution. The /proc/sys directory is the one that hosts all the read-write files and thus can be used to change kernel behaviour.


/proc/sys/kernel - This directory contains information that reflects general kernel behaviour. /proc/sys/kernel/{domainname,hostname} hold the domain name and hostname of the machine/network. These files can be modified to change these names.

$hostname
$cat /proc/sys/kernel/domainname
$cat /proc/sys/kernel/hostname
$echo "new-machinename" > /proc/sys/kernel/hostname
$hostname

Thus, by modifying a file inside the /proc file system, we are able to modify the hostname. Lots of other configurable files exist inside /proc/sys/kernel. Again, it is impossible to list every file here, so readers are expected to go through this directory in detail.
Another configurable directory is /proc/sys/net. Files inside this directory can be modified to change the networking properties of the machine/network. E.g. By simply modifying a file, you can hide your machine in the network.


$echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

This will hide your machine on the network, as it disables answers to ICMP echo requests. The host will not respond to ping queries from other hosts.

$ ping machinename.domainname.com


To turn it back to the default behaviour, do
$ echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all

There are lots of other sub-directories in /proc/sys which can be configured to change the kernel properties. See [1],[2] for detailed information.

Conclusion:

The /proc file system provides a file-based interface to the Linux internals. It assists in determining the state and configuration of various devices and processes on the system. Understanding and applying knowledge of this file system is therefore the key to making the most of your Linux system.


vi editor

VI Editor

What is vi?

The default editor that comes with the UNIX operating system is called vi ( visual editor ). [Alternate editors for UNIX environments include pico and emacs, a product of GNU.]
 The UNIX vi editor is a full screen editor and has two modes of operation:
  • command mode: commands which cause action to be taken on the file, and 
  • insert mode: in which entered text is inserted into the file.

In the command mode, every character typed is a command that does something to the text file being edited; a character typed in the command mode may even cause the vi editor to enter the insert mode. In the insert mode, every character typed is added to the text in the file; pressing the <Esc> (Escape) key turns off the insert mode.

While there are a number of vi commands, just a handful of these is usually sufficient for beginning vi users. To assist such users, this page contains a sampling of basic vi commands. The most basic and useful commands are marked with an asterisk (* or star) in the tables below. With practice these commands should become automatic.
NOTE: both UNIX and vi are case-sensitive. Be sure not to use a capital letter in place of a lower-case letter; the results will not be what you expect.

To get into and out of vi :

To start vi
To use vi on a file, type in vi filename. If the file named filename exists, then the first page ( or screen ) of the file will be displayed; if the file does not exist, then an empty file and screen are created into which you may enter text.


*
vi filename
Edit filename starting at line 1


vi -r filename
Recover filename that was being edited when the system crashed
 
To Exit vi
Usually the new or modified file is saved when you leave vi. However, it is also possible to quit vi without saving the file.
NOTE: The cursor moves to the bottom of the screen whenever a colon (:) is typed. This type of command is completed by hitting the <Return> (or <Enter>) key.

*
:x
Quit vi, writing out the modified file to the file named in the original invocation
A common occurrence in text editing is to replace one word or phrase by another. To locate instances of particular sets of characters ( or strings ), use the following commands.

/string
Search forward for occurrence of string in text
?string
Search backward for occurrence of string in text
n
Move to next occurrence of search string
N
Move to next occurrence of search string in the opposite direction
Determining Line Number:
Being able to determine the line number of the current line or the total number of lines in the file being edited is sometimes useful.

: .=
Returns line number of current line at bottom of screen
:=
Returns the total number of lines at bottom of screen
^g
Provides the current line number, along with the total number of lines, in the file at the bottom of the screen
Saving and Reading Files:
These commands permit you to input and output files other than the named file with which you are currently working.

:r filename
Read file named filename and insert after current line ( the line with cursor)
:w
Write current contents to file named in original vi call
:w newfile
Write current contents to a new file named newfile
:12, 35w smallfile
Write the contents of the lines numbered 12 through 35 to a new file named smallfile
:w! prevfile
Write current contents over a pre-existing file named prevfile

^f
Move forward one screen
^b
Move backward one screen
^d
Move down (forward) one half screen
^u
Move up (back) one half screen
^l
Redraws the screen
^r
Redraws the screen, removing deleted lines
Adding, Changing and Deleting Text :
Unlike PC editors you cannot replace or delete text by highlighting it with the mouse. Instead use the commands in the following tables.
Perhaps the most important command is the one that allows you to back up and undo your last action. Unfortunately, this command acts like a toggle, undoing and redoing your most recent action. You cannot go back more than one step.

*
u
Undo whatever you just did; a simple toggle
The main purpose of an editor is to create, add, or modify text for a file.
Inserting or Adding Text:
The following commands allow you to insert and add text. Each of these commands puts the vi editor into insert mode; thus the <Esc> key must be pressed to terminate the entry of text and to put the vi editor back into command mode.

*
i
Insert text before cursor, until <Esc> hit


I
Insert text at beginning of current line, until <Esc> hit
*
a
Append text after cursor, until <Esc> hit


A
Append text to end of current line, until <Esc> hit
*
o
Open and put text in a new line below current line, until <Esc> hit
*
O
Open and put text in a new line above current line, until <Esc> hit
Changing Text:
The following commands allow you to modify text.

*
r
Replace single character under cursor (no <Esc> needed)


R
Replace characters, starting with current cursor position, until <Esc> hit


cw
Change the current word with new text, starting with the character under cursor, until <Esc> hit


cNw
Change N words beginning with character under cursor, until <Esc> hit; e.g., c5w changes 5 words


C
Change (replace) the characters in the current line, until <Esc> hit


cc
Change (replace) the entire current line, stopping when <Esc> is hit


Ncc or cNc
Change (replace) the next N lines, starting with the current line, stopping when <Esc> is hit
Deleting Text:
The following commands allow you to delete text.

*
x
Delete single character under cursor


Nx
Delete N characters, starting with character under cursor


dw
Delete the single word beginning with character under cursor


dNw
Delete N words beginning with character under cursor; e.g., d5w deletes 5 words


D
Delete the remainder of the line, starting with current cursor position
*
dd
Delete entire current line


Ndd or dNd
Delete N lines, beginning with the current line; e.g., 5dd deletes 5 lines
   Cutting and Pasting Text:
The following commands allow you to copy and paste text.

yy
Copy (yank, cut) the current line into the buffer
Nyy or yNy
Copy (yank,cut) the next N lines, including the current line, into the buffer
p
Put (paste) the line(s) in the buffer into the text after the current line

Other Commands:
Searching Text:



:wq
Quit vi, writing out modified file to file named in original invocation


:q
Quit (or exit) vi
*
:q!
Quit vi even though latest changes have not been saved for this vi call

Moving the cursor:
Unlike many of the PC and Macintosh editors, the mouse does not move the cursor within the vi editor screen (or window). You must use the key commands listed below. On some UNIX platforms, the arrow keys may be used as well; however, since vi was designed with the Qwerty keyboard ( containing no arrow keys ) in mind, the arrow keys sometimes produce strange effects in vi and should be avoided.
If you go back and forth between a PC environment and a UNIX environment, you may find that this dissimilarity in methods for cursor movement is the most frustrating difference between the two.
In the table below, the symbol ^ before a letter means that the <Ctrl> key should be held down while the letter key is pressed.

*
j or <Return> [or down-arrow]
Move cursor down one line
*
k [or up-arrow]
Move cursor up one line
*
h or <Backspace> [ or left-arrow]
Move cursor left one character
*
l or <Space> [ or right-arrow]
Move cursor right one character
*
0 (zero)
Move cursor to start of current line ( the one with the cursor)
*
$
Move cursor to end of current line


w
Move cursor to beginning of next word


b
Move cursor back to beginning of preceding word


:0 or 1G
Move cursor to first line in file


:n or nG
Move cursor to line n


:$ or G
Move cursor to last line in file

Screen Manipulation:
The following commands allow the vi editor screen ( or windows ) to move up or down several lines and to be refreshed.


















Monday, March 5, 2012

File System Commands

File System Commands

  • Linux looks at everything as a file. A Linux file system will have tens of thousands of files. If you write a program, you add one more file to the system. When you compile it you are adding at least two more.

  • A Linux file is a storehouse of information. It is simply a sequence of characters. Linux places no restriction on the structure of a file. A file contains exactly those bytes that you put into it, be it a source program, executable code or anything else. It contains neither its own size nor its attributes. It doesn't even contain its own name.

  • Although everything is treated as a file by Linux, files are still divided into various categories:

                     Ordinary files, device files, directory files etc.

Commands:

pwd: Checking current directory:

  • When you log in you are placed in a specific directory of the file system. This directory is known as your current directory. At any time, you should be able to know what your current directory is. For this we use the "pwd" command.

$pwd
/usr/ben

cd: changing directories:

  • We can move around in the file system using cd command. It changes the current directory to the directory specified as the argument. For example:


$pwd
/user/username
$cd progs
$pwd
/user/username/progs

  • The command cd progs means: change your current directory to progs under the current directory. If we use the absolute pathname instead, the effect will be the same. You can use it like this also:
cd /user/username/progs

If we want to switch to the /bin directory, where most of the Unix/Linux commands are kept, we have to use the absolute pathname.
If we use cd without any argument, it reverts to the home directory.

mkdir: Making Directories:

  • We can create directories with the mkdir command. The command is followed by the names of the directories to create. For example, to make a directory named patch:

$mkdir patch

We can create number of sub directories in a single shot.

$mkdir one two three

Unix/Linux systems let us create a directory chain with just one command. For example:


$mkdir foo foo/one foo/two

This command creates three directories - foo & two subdirectories under foo.
But we can't enter it like this -

$mkdir foo/one foo/two foo

This command fails to create the two subdirectories one & two ( their parent foo doesn't exist yet ) but still creates the foo directory.
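
As a side note, a quick sketch using the -p option, which creates any missing parent directories regardless of the order of the arguments:

$mkdir -p foo/one foo/two     ( creates foo, foo/one and foo/two even if foo does not exist yet )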


rmdir: Removing Directories:

  • We can remove directories with rmdir command. For example:

$rmdir patch
Like mkdir, rmdir can also delete more than one directory in a single shot. For example:

$rmdir one two three
Also, the directory tree that we just prepared using mkdir can be removed by using rmdir with the reverse set of arguments.

$ rmdir foo/one foo/two foo

But we can't enter like this.
$rmdir foo foo/one foo/two

In this case, rmdir deletes the lower-level directories one & two but complains about foo, because foo is not empty at the time it is processed. You should remember the following rules while deleting directories.

  • You can't delete a directory unless it is empty. In this case the foo directory can't be removed first because of the existence of the directories one & two.

  • You can't remove a subdirectory unless you are placed in a directory which is hierarchically above the one you have chosen to remove.

To understand this look at the following example.

$cd progs
$pwd
/user/username/progs
$rmdir progs
rmdir: progs: Directory does not exist

To remove this directory we must position ourselves in the directory above progs that is username & remove it from there.

$cd /user/username
$pwd
/user/username
$rmdir progs

The mkdir and rmdir commands work with only those directories, which are owned by the user.

ls: Listing Files:

This command is used to list all the files in the current directory.

$ls

We see here a complete list of filenames in the current directory, arranged in ASCII collating sequence with one filename on each line.
If you want a particular file, just use ls with that filename.

$ls patch
patch

-x Option: Output in Multiple Columns:
When we have several files, it is better to display the filenames in multiple columns. The -x option produces the multi-columnar output.
-F Option: Identifying Directories & Executables:
With the plain ls command we get only the filenames, but we don't know how many of them are directories or executable files. For this, the -F option should be used.

$ls -F

Note the two symbols that tag some of the filenames: the * indicates a file containing executable code & the / refers to a directory.

-a Option:Showing Hidden Files:

The ls command doesn't show all the files in a directory. Every directory has some hidden files. The -a option lists all the files, including the hidden ones.

Files beginning with a '.' are normally not listed by plain ls.

-r Option: Reversing the sort order:

We can reverse the order of the presentation of the list with the -r option. The sorting is done in the reverse ASCII collating order.

$ls -r

The ls command will work differently if we give two directory names with it: $ls directory1 directory2
Here is the list of important option:


Options Description
-x Display multi-columnar output
-r Sorts files in reverse order
-R Recursive listing of all files in subdirectories
-F Marks executables with * & directories with /
-a Shows all files including hidden ones
-i Shows inode number of files
-l Long listing showing seven attributes of files
-d Force listing of directories
-t Sorts files by modification time
-u Sorts files by access time
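
A quick sketch combining a few of these options ( run in any directory ):

$ls -xF        ( multi-column listing, with / and * markers )
$ls -la        ( long listing including hidden files )
$ls -ltr       ( long listing, oldest files first )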
 
 cat : Displaying and Creating Files:
cat command is generally used to display the contents of a file.
$cat filename
 cat also accepts more than one filename
$cat file1 file2
 The contents of the second file are shown immediately after the contents of the first file. cat is normally used to display text files only.
Using cat to create a file:
cat is also used for creating files. Look at the following example:
$cat > one
Type the data to insert into the file, then press Ctrl-d.
$ prompt returns
When we terminate the command line by pressing the Enter key, the prompt vanishes & cat now waits for input from the user. Finally press Ctrl-d to signify the end of data input.

cp: Copying of files:
Copy the contents of one file to another with the cp command.
syntax
cp [options] old_filename new_filename
you now have two copies of the file, each with identical contents. They are completely independent of each other and you can edit and modify either as needed. They each have their own inode, data blocks and directory table entries.

We can also copy files to another directory. For example
cp one progs/two
cp can also be used to copy more than one file in a single shot. In that case the last argument must be a directory. For example, to copy the files one, two, three to the progs directory, we write:
$cp one two three progs
For this command to work, the progs directory must already exist.
You can also use the * metacharacter if the names have the same starting characters. For example, if the filenames are page1, page2, page3:
cp page* progs

Common Options
-i : interactive ( prompt and wait for confirmation before proceeding)
$cp -i one two
cp:overwrite two?y/n
-r: recursively copy a directory
It is possible to copy entire directory structure with this option. For example the following command copies all files & subdirectories from progs to newprogs.
cp -r progs newprogs

rm: Deleting files:
Remove a file with the rm, remove command.
syntax
rm [option] filename
common options
-i : interactive ( prompt and wait for confirmation before proceeding )
-r : recursively remove a directory, first removing the files and 
      subdirectories beneath it.
-f : don't prompt for confirmation (overrides -i)

Examples:
$rm old_filename
A listing of the directory will now show that the file no longer exists. Actually all you've done is to remove the directory table entry and mark the inode as unused. The file contents are still on the disk, but the system now has no way of identifying those data blocks with a file name. There is no command to "unremove" a file that has been removed in this way. For this reason many novice users alias their remove command to "rm -i", where the -i option prompts them to answer yes or no before the file is removed.
rm doesn't normally remove a directory but removes files from it. You can remove two files from the foo directory without having to "cd" to it.
$rm foo/one foo/two
Sometimes it is required to delete all the files of a directory; then we can use the * metacharacter which represents all files. For example:
$rm *

mv: Renaming Files:
Rename a file with the move command, mv.
Syntax:
mv [option] old_filename new_filename

Common Options:
-i interactive (prompt and wait for confirmation before proceeding)
-f don't prompt even when moving over an existing target file ( overrides -i )
Example:
$mv old_filename new_filename
You now have a file called new_filename and the file old_filename is gone. Actually all you've done is to update the directory table entry to give the file a new name. The contents of the file remain where they were.
mv can also be used to rename a directory.

more:Paging Output:
If a file is too large to display its contents on one screen, the contents will scroll off the screen when we view it with cat. So a pager is required; more allows the user to view a file one screen at a time. To view the contents of a file page1, use the following command with the filename.
$more page1
You will see the contents on the screen one page at a time. At the bottom of the screen, you will see the filename with the percentage of file content that has been viewed.
more command also contains some internal commands-
q - exit more.
f - scroll forward one screen.
b - scroll backward one screen.
To move more slowly, one line at a time, use the j key for scrolling forward and the k key for scrolling backward.
We can use more with multiple filenames:
more page1 page2 page3
You will first see the contents of the first file preceded by its name; after that the message "page1: END (next file: page2)" and then the contents of the next file. In this way it displays the contents of all the files. In between we can switch to the next or previous file using :n or :p.

lp (Line Printing): Printing A File:
Users are not allowed to directly access the printer. But users can spool ( or line up ) a job with others in a print queue. Spooling prints the jobs in an orderly manner and relieves the user from the necessity of administering the usage of print resources. The spooling facility is provided by the lp command. Take a look at the following example:
$lp page1
request id is pr1-100 (1 file)
This command will print a single copy of the file page1. It also gives the request id, which is a combination of the printer name & job number.
If there is more than one printer on a system, we have to use the -d option with the printer name. For example, if the printer name is dotmatrix:
$lp -ddotmatrix page1
Another option we can use with lp is -t ( title ), followed by the title string. This prints the title on the first page.
$lp -t "first page" page1
We can cancel our jobs with cancel command which requires the request id of the job as an argument.
$ cancel pr1-100

file: To know the file types:
There are three types of files, i.e. ordinary files, directory files & device files. A regular file may contain simple text, a C program or an executable. The file command is used to determine the type of a file.
$file filename.lst
filename.lst:       English text
You can use this command with more than one filename. We can apply it to all the files in a directory; just look at the following example ( * indicates all the files ):
$file /proc/*

wc: Line, Words and Character Count:
This command counts lines, words and characters depending on the options used. It takes one or more file names as arguments. Before you use the wc command, take a look at the following example.

Create a file and type some lines into it:
$ cat > filename 
$wc filename
This command offers three options to make specific counts. The -l option counts only the number of lines, -w counts the number of words and -c counts the number of characters.
$wc -l filename
$wc -w filename
$wc -c filename
We can use this command with multiple filenames; wc produces a separate line for each file.
$ wc file1 file2 file3


split: Splitting a file into multiple files:

Some files are so large that we sometimes find it difficult to edit them. So there's a need to split them. The split command breaks up its input into several equal-length segments, and all these segments are created as separate files in the current directory. By default, split breaks up a file into 1000-line pieces.
$split file1
This command creates a group of files xaa, xab, xac and so on, up to xzz. You can override this default figure of 1000 by specifying the number of lines:
$split -l 101 filename
Each file will contain 101 lines.
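
A small sketch of splitting a file and putting it back together ( bigfile is just an example name ):

$split -l 101 bigfile          ( produces xaa, xab, xac, ... )
$wc -l xa*                     ( each piece has at most 101 lines )
$cat xa* > bigfile.rebuilt     ( concatenating the pieces in order recreates the file )
$cmp bigfile bigfile.rebuilt   ( no output means the two files are identical )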

cmp: Comparing Two files:
Sometimes two files are exactly identical to each other; in such cases you can delete one of them. There are three commands to do this comparison, and one of them is the cmp command. This command requires two file names as arguments.
$cmp file1 file2
This command compares the two files byte by byte and displays the location of the first mismatch on the screen. The -l option gives a detailed list of the byte number and the differing bytes in octal for each character that differs in the two files.
$cmp -l file1 file2
If the two files are identical, cmp does not display any message but simply returns the $ prompt.
diff: Differences in files:
The diff command compares two files, directories, etc. and reports all differences between the two. It deals only with ASCII files. Its output format is designed to report the changes necessary to convert the first file into the second.
Syntax:
diff [option] file1 file2
Common Options
-b ignore trailing blanks
-i ignore the case of letters
-w ignore all whitespace (spaces and tabs)
-e produce an output formatted for use with the editor, ed
-r apply diff recursively through common sub-directories
Examples:
For the files file1 and file2, diff reports the differences between them.
 Note that the output lists the differences as well as in which file each differing line exists.
Lines from the first file are preceded by "<", and those from the second file are preceded by ">".
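
A tiny worked example with two hypothetical files that differ in one line, just to show the output format:

$cat file1
apples
oranges
$cat file2
apples
mangoes
$diff file1 file2
2c2
< oranges
---
> mangoes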

File Permissions:
Each file, directory and executable has permissions set for who can read, write, and/or execute it. To find the permissions assigned to a file, the ls command with the -l option should be used. Also, using the -g option with "ls -l" will help when it is necessary to know the group for which the permissions are set ( BSD only ).
When using the "ls -lg" command on a file ( ls -l on SysV ), the output will appear as follows.
-rwxr-x--- user unixgroup size month nn hh:mm filename
The area above designated by letters and dashes ( -rwxr-x--- ) shows the file type and permissions. Therefore, a permission string of -rwxr-x--- allows the user ( owner ) of the file to read, write and execute it; those in the unixgroup of the file can read and execute it; others cannot access it at all.

Chmod - Change File Permissions:
The command to change permissions on an item ( file, directory, etc) is chmod ( changes mode). The syntax involves using the command with three digits ( representing the user ( owner, u) permission, the group (g) permissions, and other (o) user's permissions) followed by the argument (which may be a file name or list of files and directories). Or by using symbolic representation for the permissions and who they apply to.
Each of the permission types is represented by either a numeric equivalent:
read=4, write=2, execute=1
or a single letter:
read=r, write=w, execute=x
A permission of 4 or r would specify read permission. If the permissions desired are read and write, the 4 (representing read) and the 2 (representing write) are added together to make a permission of 6. Therefore, a permission setting of 6 would allow read and write permissions.
Alternatively, you could use symbolic notation, which uses the one-letter representation for who and for the permissions, and an operator, where the operator can be:
+ add permissions
- remove permissions
= set permissions
So to set read and write for the owner we could use "u=rw" in symbolic notation.
Syntax:
$ chmod nnn [argument list] numeric mode
$ chmod [who]op[perm] [argument list] symbolic mode 
Where nnn are the three numbers representing user, group and other permissions, who is any of u, g, o or a (all), and perm is any of r, w, x. In symbolic notation you can separate permission specifications by commas, as shown in the example below.
Common Options:
-f force (no error message is generated if the change is unsuccessful)
-R recursively descend through the directory structure and change the modes.

Example:
If the permissions desired for file1 are user: read, write, execute; group: read, execute; other: read, execute, the command to use would be
$ chmod 755 file1    or    chmod u=rwx,go=rx file1
Reminder: When giving permissions to group and other to use a file, it is necessary to allow at least execute permission on the directories in the path in which the file is located. The easiest way to do this is to be in the directory for which permissions need to be granted:
chmod 711 .    or    chmod u=rwx,go=x .
where the dot (.) indicates this directory.

Shell
A shell is a command interpreter that executes commands. A Unix shell is both a command interpreter, which provides the user interface to the rich set of Unix utilities, and a programming language, allowing these utilities to be combined. Files containing commands can be created and become commands themselves. These new commands have the same status as system commands in directories like '/bin', allowing users or groups to establish custom environments.
A shell allows execution of Unix commands both synchronously and asynchronously. The shell waits for synchronous commands to complete before accepting more input; asynchronous commands continue to execute in parallel with the shell while it reads and executes additional commands. The redirection constructs permit fine-grained control of the input and output of those commands, and the shell allows control over the contents of their environment. Unix shells also provide a small set of built-in commands ( builtins ) implementing functionality that is impossible ( e.g. cd, break, continue and exec ) or inconvenient ( history, getopts, kill or pwd, for example ) to obtain via separate utilities. Shells may be used interactively or non-interactively; they accept input typed from the keyboard or from a file.
While executing commands is essential, most of the power ( and complexity ) of shells is due to their embedded programming languages. Like any high-level language, the shell provides variables, flow control constructs, quoting and functions.
Shells have begun offering features geared specifically for interactive use rather than to augment the programming language. These interactive features include job control, command line editing, history and aliases.
Shell Operations:
The following is a brief description of the shell's operation when it reads and executes a command. Basically the shell does the following:
Reads its input from a file, from a string supplied as an argument to the '-c' invocation option, or from the user's terminal.
Breaks the input into words and operators, obeying the quoting rules. These tokens are separated by metacharacters. Alias expansion is performed by this step.
Parses the tokens into simple and compound commands.
Performs the various shell expansions, breaking the expanded tokens into lists of filenames and commands and arguments.
Performs any necessary redirections and removes the redirection operators and their operands from the argument list.
Executes the command.
Optionally waits for the command to complete and collects its exit status.

comments:
In a non-interactive shell, or an interactive shell in which the interactive_comments option to the shopt builtin is enabled, a word beginning with '#' causes that word and all remaining characters on that line to be ignored. An interactive shell without the interactive_comments option enabled does not allow comments. The interactive_comments option is on by default in interactive shells.

Pipelines:
A pipeline is a sequence of simple commands separated by '|'. The format for a pipeline is -
[time [-p]] [!] command1 [|command2..]
The output of each command in the pipeline is connected to the input of the next command. That is each command reads the previous command's output.
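
A small illustrative pipeline ( counting how many bash processes are running; any commands could be substituted ):

$ps -ef | grep -i bash | wc -l     ( ps lists processes, grep filters them, wc -l counts the matching lines )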

Pattern Matching : Wild cards:
Up to this point we have used certain commands with more than one filename (e.g., file1, file2, file3) as arguments. Often you may need to enter a number of such filenames on the command line. To illustrate this, look at the following example:
$ls -l file1 file2 file3
Each filename has a common string, viz. file. But this command line requires a lot of typing & doesn't look good. Fortunately the shell offers a solution to this.
The * And ?
The * is known as a metacharacter; it matches any number of characters. When this character is appended to the string file, the pattern file* expands into all filenames whose first four characters constitute the string file. The previous command line can now be shortened with this sequence.
$ ls -l file*
Now when the shell encounters the above command line, it looks for metacharacters & identifies the * immediately. It then matches all the files in the current directory against the pattern file* and replaces the pattern with the list of matching names. It then reconstructs the command line as below & passes it to the kernel for execution.
$ls -l file1 file2 file3
The next metacharacter is ?, which matches a single character. When used with the same string file (file?), the shell matches all five-character filenames beginning with file. Placing another ? at the end of this string creates the pattern file??, where file is followed by two characters. Look at the following example.
$ls -x file?

Pattern Matching
Any character that appears in a pattern other than the special pattern characters described below, matches itself. The NULL character may not occur in a pattern. The special pattern characters must be quoted if they are to be matched literally.
The special pattern characters have the following meanings:
* Matches any string including the null string.
? Matches any single character.
[...] Matches any one of the enclosed characters. A pair of characters separated by a minus sign denotes a range; any character lexically between those two characters, inclusive, is matched. If the first character following the `[' is a `!' or a `^', then any character not enclosed is matched. A `-' may be matched by including it as the first or last character in the set. A `]' may be matched by including it as the first character in the set. Within `[' and `]', character classes can be specified using the syntax [:class:], where class is one of the following classes defined in the POSIX.2 standard:
alnum alpha ascii blank cntrl digit graph lower print punct space upper xdigit
A character class matches any character belonging to that class. Within `[' and `]', an equivalence class can be specified using the syntax [=c=], which matches all characters with the same collation weight ( as defined by the current locale ) as the character c. Within `[' and `]', the syntax [.symbol.] matches the collating symbol symbol.
If the extglob shell option is enabled using the shopt builtin, several extended pattern-matching operators are recognized. In the following description, a pattern-list is a list of one or more patterns separated by a `|'. Composite patterns may be formed using one or more of the following sub-patterns:
?(pattern-list)
         Matches zero or one occurrence of the given patterns.
*(pattern-list)
        Matches zero or more occurrences of the given patterns.
+(pattern-list)
        Matches one or more occurrences of the given patterns.
@(pattern-list)
       Matches exactly one of the given patterns.
!(pattern-list)
       Matches anything except one of the given patterns.
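
A short sketch of extended patterns in bash ( the *.txt / *.log names are only examples ):

$shopt -s extglob              ( enable extended pattern matching )
$ls !(*.txt)                   ( everything except .txt files )
$ls *.@(txt|log)               ( files ending in .txt or .log )
$ls +([0-9]).bak               ( names made of one or more digits followed by .bak )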

When Wild - Cards lose their meaning:
The metacharacters * and ? lose their meaning when used inside the class & are matched literally. Similarly, - & ! also lose their significance when placed outside the class. Additionally, ! loses its meaning when placed anywhere but at the beginning of the class. The - loses its meaning if it is not bounded properly on either side by a single character.
Matching the Dot:
The * doesn't match files beginning with a . (dot), or the / of a pathname. If you want to list all the hidden files in your directory having at least 3 characters after the dot, then the dot must be matched explicitly.
$ls -x .???*

When using rm with *:
We use these metacharacters to speed up our interaction with the system, but there are great risks involved if you are not alert. When using shell metacharacters, especially the *, you could be totally at sea if instead of typing
$rm file*
which removes all the files beginning with file, you inadvertently introduce a space:
$rm file * ( space between file and * )
Even if there is no file named file in the current directory, you only get an error message for it; the command still removes all other files in this directory because of the separate existence of the *. This happens because the shell rightly treats the * as a separate argument.
Escaping: The Backslash (\):
It is a generally accepted principle that file names should not contain shell metacharacters. What happens if we do? Imagine a file named file* created with the > symbol:
$>file*    # you can create an empty file like this 

How do we remove this file without deleting other files ( if we have other files named file1, file2, file3 )? The shell uses another character, \ (backslash), to remove the special meaning of any metacharacter placed after it. Here the shell has to be told that the asterisk has to be treated & matched literally, instead of being interpreted as a metacharacter; use \ before the * to solve the problem.
$ ls -x file\*
file*
$ rm file\*
Escaping The < ENTER> Key:
When you enter a long chain of commands you can split the command line by hitting enter key. But only after the \ escapes this key.
$wc -l file1 \ ( Enter)
> file2 file3 ( Press Enter )


Quoting:
What happens when we enter a shell metacharacter with echo command as below:
$echo *
( the output lists files and directories )
You simply see a list of all the files in the current directory. Why does that happen? The shell uses the * metacharacter to match the files in the current directory. All files matched, so you see all of them in the output. Suppose you intend to literally echo a * without permitting the shell to interfere. Escaping with a \ solves the problem.
$echo \*
There is another solution. When the argument to a command is enclosed in quotes, the meaning of all special characters is turned off.
$echo '*' ( you can use double quotes also )

Quoting Preserves Spaces:
Whenever the shell finds contiguous spaces & tabs in the command line, it compresses them to a single space. Look at the following example:
$echo write one or two lines with so many spaces.
The above arguments to echo could have been preserved by escaping the space character wherever it occurs more than once.
$ echo dflfjdasl\ \ \sdafsda\ \ \sdfsad\ \    \sdfsad\.
In this situation quoting is recommended.
$echo "sdfklsajlfjsadl;"

grep: Searching For A Pattern:
grep is one of the most useful Unix filters. It scans a file for a specific pattern and displays the selected lines, the filenames where that pattern occurs, or the line numbers on which the matches are found. The syntax is as follows - grep [option] pattern filename(s)
Look at the following example
$ps -aux | grep firefox
When grep is unable to find the required pattern, it silently returns the prompt. Take a look at this -
$ grep production record1.lst
$-    #No production person found
It is generally safe to quote the pattern, though it is not always required. Quoting is essential if the search string consists of more than one word. Try the following example -
$grep baiju damodar record1.lst
grep: can't open damodar
grep interprets damodar as a filename and so is unable to open that file, but it still searches record1.lst and gives the result. To correct this problem, quote the pattern -
$ grep 'baiju damodar' record1.lst
When we use grep with a series of strings, it interprets the first argument as the pattern & the rest as filenames. It then displays the filename along with each matching line.
$grep manager record1.lst record2.lst
grep options:

Option
Significance
-c
Displays count of the number of occurrences
-l
Displays list of filenames only
-n
Displays line numbers along with lines
-v
Displays all but the lines matching the expression
-i
Ignores case when matching
-h
Omits filenames when handling multiple files
-C n
Displays the matching line & n lines above & below it
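
A few quick usages of these options ( record1.lst and the patterns are carried over from the examples above ):

$grep -c manager record1.lst              ( just the number of matching lines )
$grep -in 'baiju damodar' record1.lst     ( case-insensitive, with line numbers )
$grep -l manager record1.lst record2.lst  ( only the names of the files that match )
$grep -v manager record1.lst              ( every line that does NOT match )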
 














Wednesday, February 29, 2012

User and Group Management

User and Group Management

Description:

Managing user accounts in Linux is of paramount importance, as users need an ID just to log in to the system. This section covers how to create, delete and change user attributes.

Introduction:

Linux, being a multi-user, multitasking operating system, needs every user to possess a user name and a password. So to use Linux efficiently we first need to know the process of creating users and using the created users to log on and log out of the system.

We summarize a few points regarding user accounts.

  • The Linux operating system is case sensitive. This means that we use commands in lower case to get our work done. Commands typed in upper case will throw errors.
  • Everyone on a Linux system needs a user account.
  • Every account has rights and privileges that vary depending on the command and the directory. 
  • Linux users are organized into groups.
  • In RHEL, user accounts are organized in the '/etc/passwd' file.
  • Their passwords are made more secure by the use of the '/etc/shadow' file.
  • When creating a new account, the default parameters are configured in the '/etc/login.defs' file.
  • Default configuration files are stored in another directory, from where the files are copied to the user's home directory when the user is created.

Introduction to files that relate to the users of a system:

/etc/passwd
  •  Linux users are classified into three types
  1. Administrative (root)
  2. Regular
  3. Service
  • The administrative root account is automatically created when you install Linux; it has administrative privileges for all services on your Linux box.
  • Regular users have the necessary privileges to perform standard tasks on a Linux computer. They can access programs such as word processors, databases and web browsers. They can store files in their home directories only.
  • Services such as Apache, ftp, Squid, mail, games, audio etc. have their own individual service accounts.
  • Each column in the '/etc/passwd' file is delimited by a colon (:)

Column
Function
Comment
1
Username
Login name of the account
2
Password
If this field contains an X, the encrypted password is stored in /etc/shadow
3
UserID
A numeric ID for the user. Assigned by the OS
4
GroupID
A numeric ID for the default group of the user. Assigned by the OS
5
Extra Information
Commonly used for the user's real name. Can be any comment on the account
6
Home directory
The path to the user's home directory
7
Default shell
The shell the user sees after logging in
 
/etc/shadow

  • This is the file that holds the password information of the users of the machine.
  • It is a read-only file for root, and no permissions are afforded to anyone else. Hence it is more secure.
  • If you use the standard commands to create new users, basic information is also added to this file based on the defaults in '/etc/login.defs'.
Column
Function
Comment
1
Username
Login name
2
Password
The encrypted password. Blank if the user has no password. A * signifies a password has not been defined. If the user is disabled from logging in to the system, an ! is displayed.
3
Number of days
Last time the password was changed, in days since January 1, 1970
4
Minimum password life
You can't change the password for at least this amount of time in days.
5
Maximum password life
You have to change the password after this period of time in days
6
Warning period
You get a warning this many days before your password expires
7
Disable account
If you don't use your account this many days after your password expires, you can't log in
8
Account expiration
If you don't use your account by this date, you won't be able to log in. May be in yyyy-mm-dd format or in number of days since January 1, 1970
 
/etc/skel

  • Users have a default set of configuration files and directories.
  • The default list of these files is located in the '/etc/skel' directory.
  • You can see the files by listing the hidden contents of the skel directory. 
  • All new users will get a copy of these  files in their home directories.

Files
Purpose
.bashrc
The basic bash configuration file. May include a reference to the general /etc/bashrc
.bash_logout
A file executed when you exit a bash shell. Can include commands appropriate for this purpose, such as clearing your screen
.bash_profile
Configures the bash start up environment. Appropriate place to add environment variables or to modify the directories in your PATH
.gnome*
Several directories that include start-up settings for the GNOME desktop environment. For example, details of desktop icons such as Trash are stored in .gnome-desktop/Trash
.gtkrc
Adds the bluecurve theme for the default Red Hat GUI
.kde
A directory that includes auto-start settings for the K Desktop Environment. Not copied to the users' home directories if you haven't installed KDE on the computer.

/etc/login.defs

  • When you create a new user the basic parameters come from the /etc/login.defs configuration file.
  • The version included with RedHat Linux includes settings for
  1. Email directories
  2. Password aging defaults
  3. UserID numbers
  4. GroupID numbers
  5. Creating a home directory 
Adding Users - Creating and management of user account :

Login/Logout, Shutting down, Restarting a System and using commands to add users 
  • To log on to the system, enter your username and password
login: username
password: **********

  •  To create a user
#useradd username ( Read man page for more controlled user creation )
#man useradd

  • To set the password for the newly created user
#passwd username

  • To log off the system and enter as a new user
#exit
  • To shut down a system
#init 0
#shutdown -h now
#poweroff

  • To restart a system
#init 6
#shutdown -r now
#reboot

Editing the /etc/passwd file directly:

  • Open the /etc/passwd file for editing using a text editor.
# vim /etc/passwd
  • Start a new line. The easiest way to do this is by copying the applicable information from a current user.
  • Substitute the information of your choice to create the new user.
  • For example, set the user name to testuser, the UID to 1010, the GID to 1010, the full name to "This is a test user" ( less the quotes ), and /home/testuser as the home directory.
  • Save and Exit.
  • Open the /etc/shadow file for editing using a text editor.
# vim /etc/shadow

  • Create a new line by copying the applicable information from a current user.
  • Change the username to testuser ( and add a matching group named testuser with GID 1010 in /etc/group ).
  • Save and exit.
  • Set up your new user's home directory.
#mkdir -p /home/testuser

  • Give the new user access to his home directory.
# chown testuser:testuser /home/testuser

  • Assign a new password with the passwd command, then switch to the new user.
# passwd testuser
# su - testuser

  • Copy the basic initialization files which are normally stored in the /etc/skel directory.
#cp -r /etc/skel /home/testuser 
#cd /home/testuser
#mv skel/* .           ---------------------------> (ignore any error message)
#mv skel/.* .
#rmdir skel

  • Change the Ownership of the files and directories copied to the home directory of the user.
#chown -R testuser:testuser /home/testuser
  • Logout from the user's account.
  • Assuming you're using the default shadow password suite, run the pwconv and grpconv commands.

Deleting user account by deleting entries in the files:

  1. Delete the user's entry from the /etc/passwd file.
  2. Delete the user's entry from the /etc/group file.
  3. Delete the user's entry from the /etc/shadow file.
  4. Delete the user's entry from the /etc/gshadow file.
  5. Delete the user's home directory after saving the files you need.

Commands:
useradd -u 1010 -g 1010 -s /bin/bash -d /home/testuser -m testuser
#useradd username
#useradd -u UID username
#useradd -g GID username
#useradd -G group1,group2 username     {supplementary groups}
#useradd -c "comment" username         {comment / full name}
#useradd -s /bin/bash username         {login shell}
#useradd -e yyyy-mm-dd username        {account expiry date}
#useradd -d /home/dir username         {home directory}
#useradd -m username {to create the home directory if it does not exist}

usermod -l -L -U:
usermod -l newname username          {to change the user login name}
usermod -L username         { to lock a user account}
usermod -U username         {to unlock a user account}

chage:
chage  -m days username    { sets the minimum life of the password, in days }
chage  -M days username    { sets the maximum life of the password, in days }
chage  -I days username      { sets the number of days that an account can be
                                               inactive before it is locked }
chage -W days username     { sets an advance warning, in days, of an upcoming
                                              mandatory password change }
chage -l username                    { lists the current user's password information; can be 
                                               used by regular users on their own accounts }
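
For instance, a sketch of a typical password-aging policy for the testuser account created earlier ( the numbers are only examples ):

# chage -m 2 -M 90 -W 7 testuser     { may change after 2 days, must change every 90, warned 7 days ahead }
# chage -l testuser                  { verify the settings }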

The Shadow password Suite:

  • The Shadow Password Suite features all of the commands related to managing Linux users and groups.
  • By default RedHat Linux uses this suite to provide additional security through encrypting passwords in the /etc/shadow and the /etc/gshadow files.
  • These files require commands to convert passwords to and from the companion /etc/passwd and the /etc/group configuration files.
  • These encrypted password files have more restrictive permissions than /etc/passwd or /etc/group; only the root user is allowed to view these files, and they are not writable by default.
 Converting User Passwords:

  • Two commands are associated with converting user password in the shadow password suite.
pwconv
pwunconv

pwconv:
  • Converts an existing /etc/passwd file. Passwords that currently exist in /etc/passwd are replaced by an "x"; the encrypted password, username, and other relevant information are transferred to the /etc/shadow file.
  • If you've recently added new users by editing the /etc/passwd file in a text editor, you must run this command again to convert the passwords associated with the new users.
pwunconv:

  • Passwords are transferred back to /etc/passwd, and the /etc/shadow file is deleted.
  • Be careful as this also deletes any password aging information.

/etc/group:

  • Every linux user is assigned to a group.
  • By default every user gets their own private group.
  • This file has four columns as follows
Column
Function
Comment
1
Group name
By default, Red Hat users are members of a group with their username
2
Password
If you see an 'x' in this column, see /etc/gshadow for the actual encrypted password
3
GroupID
By default, Red Hat users have a GID the same as their UID
4
Members
Includes the usernames of others that are members of the group

/etc/gshadow:

  • The RHEL /etc/gshadow configuration file for groups is analogous to the /etc/shadow file for users.
  • It specifies an encrypted password for the applicable group, as well as administrators with privileges for a specific group.
  • This file has four columns as follows
Column
Function
Comment
1
Group name
You can create additional groups.
2
Password
The encrypted group password, added with the gpasswd command.
3
Group Administrator
The user(s) allowed to manage users in that group.
4
Group Members
Includes the usernames that are the members of the same group.

Group Commands:
  • #groupadd -g GID groupname
  • #groupmod -n new_name old_name      {to rename a group}
  • #groupdel groupname
  • #groups <username>          {displays the group memberships that the  
                                                       user has}
  • #id <username>                        { to get ID information}
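
A short sketch tying the group commands together ( developers and testuser are only example names, and the output shown is illustrative ):

# groupadd -g 1020 developers        { create a group with GID 1020 }
# usermod -aG developers testuser    { add testuser to the group, keeping its existing groups }
# groups testuser                    { testuser : testuser developers }
# id testuser                        { uid=1010(testuser) gid=1010(testuser) groups=1010(testuser),1020(developers) }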