Scenario: suppose that in a clustered environment one of the servers is about to
reach 100% CPU utilization while the remaining servers show normal utilization.
Why does this happen, and how do you resolve this kind of issue?
I have come across the same situation, where one of the JVMs was taking around 97% CPU while the others stayed low.
When I looked at the SystemOut log, a lot of threads were in a hung state.
Solution (a sample shell sketch follows the steps):
1. Take 3 thread dumps at an interval of 1-2 mins (kill -3 PID).
2. Kill that process (kill -9 PID).
3. Restart the server/JVM.
4. Analyze the dumps.
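As a minimal shell sketch of step 1, assuming an IBM JVM where kill -3 writes a javacore file into the profile's working directory; the PID value is a placeholder:
PID=12345                 # hypothetical PID of the high-CPU application server JVM
for i in 1 2 3
do
  kill -3 $PID            # each kill -3 writes a javacore.<date>.<time>.<pid>.txt thread dump
  sleep 120               # wait roughly 2 minutes between dumps
done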
Monday, November 29, 2010
interview questions 1
1. Tell me about your roles and responsibilities.
2. Do you know migration?
3. Do you know shell scripting or wsadmin scripting?
4. What are the steps you have to take while installing a fix pack?
5. How to configure a datasource?
6. What is JNDI?
7. How to take a backup of an application server?
8. How does the plugin file work?
9. What is a virtual host? When do we need a new virtual host?
10. What are the different types of user registries?
11. What is meant by a federated repository?
12. How do you find the Java version?
13. In 6.1.2.3, what is 1, what is 2, what is 3?
14. I have 2 appservers; I want to send all my requests to server1, and only if it fails should they be redirected to server2. How do you configure this?
15. My application server was fine at night when I left, but when I came in the morning it was stopped. The log files also don't contain information. How do you troubleshoot?
16. While federating a node, if you get an out of memory exception, how do you troubleshoot?
17. Explain about your environment.
18. Explain how a request flows in your environment.
19. What is the difference between base distinguished name and bind distinguished name?
20. What is meant by JNDI?
21. What is meant by an out of memory exception?
22. What is the use of the garbage collector?
23. What is meant by a memory leak?
24. What are the critical issues you faced and how did you solve them?
25. In which situation do you have to regenerate the plugin?
26. What is SSO? Did you ever configure SSO?
interview questions
Deloitte client round questions on 09-11-10
1. Tell me about your experience, roles, responsibilities, and your educational background in 3 minutes.
2. What is the difference between a normal JVM and the IBM JVM?
3. What are the different types of installations you performed?
4. Did you work from the initial stage of your appserver, or after going live, or both?
5. How do you deploy an application? Tell me the procedure, starting from taking the build to deploying it.
6. What is the best practice to set up connection pooling?
7. What is active-active and active-passive?
8. In your environment, if you face an out of memory exception, how do you troubleshoot and how do you solve it?
9. Where do you find hung threads and how do you solve them?
10. I have a horizontal cluster where I need to send all the requests to a single node. How do you do it?
11. How many different people do you interact with in your administration work?
12. If you don't know the cause of an issue that you are facing in your environment, how do you try to solve it?
13. What is the release management procedure that you follow?
14. Can you explain your environment: how many application servers, how many clusters, how many stand-alone nodes, and how many applications?
15. What is the max heap size that you can specify for a JVM on Windows and on Unix?
16. How do you identify that an application server is hung?
17. What is the maximum number of connections you can specify in connection pooling?
18. What are the contents of a heap dump?
19. What tool do you use to monitor the performance of a JVM? Can you explain how you judge that the performance of a JVM is low using this tool?
20. What are the proactive measures that you will take as an administrator?
21. If you want to improve the response time and load, what can you do?
Scenario: application having a performance problem
What is failure and load balancing?
If a request sent to an application returns an error, that is called a failure.
We get many requests to our application. In a clustered environment all the requests are distributed equally; that is called load balancing.
Suppose we have one application with a performance problem, i.e. it is taking more time to serve requests. How do we troubleshoot it, and what log files do we need to look at?
Generally, poorly written code or data structures will create a performance problem in an application, and the performance will be degraded.
If there are too many firewalls between the webserver and the application server, that will also create a performance problem.
If the application server is getting too many requests, that will also create a performance problem.
We need to check the app server JVM logs and the application logs, and we also need to check the database logs, to locate the problem.
To troubleshoot this situation, we need to check all the above locations.
Suppose we have 10 applications in our environment. How does a request go to a particular application? Could you please clarify this?
A request generally goes to the right application based on the context root of that application; a small shell illustration follows.
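As a small illustration (the host name and context roots are hypothetical): two applications deployed behind the same web server are distinguished only by the context root portion of the URL, which the plugin matches before routing the request.
curl -I http://webserver.example.com/app1/index.jsp    # matched against the application with context root /app1
curl -I http://webserver.example.com/app2/index.jsp    # matched against the application with context root /app2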
doubt-1
me: hi
what are the file names of the app server JVM logs and application logs?
santosh.n 16: hi kishore
the logs are
SystemOut.log
SystemErr.log
native_stdout.log
native_stderr.log
trace.log
ffdc logs
activity.log
me: which of these have the application logs?
santosh.n 16: systemout and systemerr
me: what type of information do the SystemOut & SystemErr files generally contain?
santosh.n 16: any JVM-related info: application server start, stop, hung threads, out of memory, etc.
Sent at 4:12 PM on Monday
me: ok
hung threads, out of memory: will they be in the SystemOut or SystemErr file?
santosh.n 16: they come in both, but mostly in SystemOut
me: ok thanks
& i have another doubt
i.e. usually in real time, what max file size is given for the logs,
and after that max file size is reached, what do we do?
santosh.n 16: 2 MB, with log rotation every 24 hrs and 3-5 historical files
me: ok
Sent at 4:17 PM on Monday
me: what are log rotation & historical files?
Sent at 4:20 PM on Monday
santosh.n 16: log rotation means when the log file reaches 2 MB it backs the file up (e.g. as SystemOut1.log) and starts updating a fresh SystemOut.log
historical files means the number of backup files it maintains
Open admin console -----> Troubleshooting --> Logging and tracing ----> select server (server1) ---> JVM Logs ---> Configuration (Log File Rotation, i.e. maximum file size in MB, and the number of historical log files)
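A quick way to see the rotation on the file system, assuming a default profile path (the path, profile and server names are assumptions; adjust for your install):
ls -lh /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut*.log   # current log plus its rotated backups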
Sunday, November 28, 2010
Which types of tickets will come?
How do the tickets come, and what are the types of tickets?
In our environment we get tickets when an application server is down, an application is down, an application server has hung threads, there is CPU starvation, connections time out, or the webserver is down.
Tickets are generated depending upon the business impact.
Tickets are generally categorized into 5 types:
P1, P2, P3, P4, P5.
If a high number of users are affected or the business impact is high, we get a P1 ticket.
For Ex: webserver down.
If a medium number of users are affected or the business impact is moderate, we get a P2 ticket.
For Ex: An appserver in a clustered environment is down.
If a small number of users are affected or the business impact is low, we get a P3.
For Ex: Users are getting a 500 internal error when they access an application.
If the business impact is very low, we get a P4.
For Ex: Disk space reached the threshold limit.
Generally P5 tickets will come for configuration changes.
websphere blog
http://websphere-solution.blogspot.com/2010/01/was-interview-questions-answers.html
http://websphere-solution.blogspot.com/2010/03/some-basic-definations-about-was.html
http://websphere-solution.blogspot.com/2010/04/was-interview-questions.html
Saturday, November 27, 2010
what is a release pack, fix pack, refresh pack, cumulative fix
http://www.orkut.co.in/Main#CommMsgs?cmm=7180&tid=5477469168692180570&kw=fix
WAS questions
1) What is the min and maximum heap size?
Ans: The default is 50 MB min and 256 MB max.
2) On what basis is the (min & max) heap size determined?
Ans: This totally depends on the requirements of the application. If it is a heavy application with many users, report generation, etc., then the heap should be larger.
3) Is the min & max heap size different on 32-bit and 64-bit operating systems?
Ans: I am not sure about 64-bit, but I guess it should be the same... can anyone validate/confirm it?
4) For a single JVM, what is the max heap size?
Ans: Not sure if IBM has set any max heap, but I guess that would depend on the system memory available. If system memory is available then it can be increased.
5) Scenario:
Physical RAM: 32 GB
A cell having 1 dmgr, 2 federated nodes and 2 node agents. How can we determine how much heap size we can set for these JVMs?
For the dmgr: 512 MB to 1024 MB should be sufficient.
Node agent: 512 MB should be sufficient.
For the appserver: well, that depends on the application requirements, but I have seen a max of 3 GB for a JVM; if the performance degrades then analyse it and increase it accordingly.
NOTE: All the heap allocations mentioned above are "conditions apply" :-)
Hi,
Technically, we can set a max 4 GB heap size for a 32-bit JVM, but there is no such limit for a 64-bit JVM.
The maximum size depends upon various factors like the total memory size of the server, the operating system (Windows, Linux), the number of processes running on the server, etc.
In a normal scenario we don't go beyond 1 GB, but if the JVM does jobs like report generation then we can go up to 2 GB.
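As a rough sketch of how to see the heap a given application server is currently configured with (the profile path and server name are assumptions; the attribute values in server.xml are in MB and correspond to the -Xms/-Xmx the JVM is started with):
grep -i heapsize /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config/cells/*/nodes/*/servers/server1/server.xml
# e.g. initialHeapSize="512" maximumHeapSize="1024" is roughly equivalent to -Xms512m -Xmx1024m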
WebSphere admin console problem (isclite.ear)
In our environment the deployment manager's isclite ear got corrupted and the console is not opening from the URL. Can anybody please give the steps to reinstall the dmgr isclite.ear from the command line? It is very urgent to get my dmgr working like before.
Note: What about my applications and clusters that are federated to it? When the console comes up, will they be well and fine, working like before, or do we have to do any configuration for them?
Hi
If the administrative console is broken or was accidentally uninstalled, re-deployment (reinstall) of the console application can be done with the Jython script deployConsole.py located in the bin folder.
1. First, a clean removal of the old admin console deployment is necessary:
"<WAS_HOME>"/bin/wsadmin.sh -lang jython -f deployConsole.py remove
2. Now the reinstall of the administrative console (isclite) should complete without errors (hopefully):
"<WAS_HOME>"/bin/wsadmin.sh -lang jython -f deployConsole.py install
3. Take a look at "profile-root"/config/cells/"cell name"/nodes/"node name"/applications/isclite.ear/deployments/isclite/deployment.xml
3.1 Check if the <deploymenttargets> tag points to the correct server.
4. Take a look at "profile root"/config/cells/"cell name"/nodes/"node name"/serverindex.xml
4.1 Check if the <deployedapplications> tag for the application isclite is mapped to the correct server (server1 in the base version of WAS).
NOTE: I don't think it should affect the other configurations, as the configs are isolated, but I can't be sure as I have never tried it... but it's an interesting question.
Can anyone confirm this?
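A consolidated sketch of the same procedure for an environment with security enabled (the install path, user and password are placeholders; deployConsole.py normally sits in the same bin directory as wsadmin.sh):
WAS_HOME=/opt/IBM/WebSphere/AppServer          # hypothetical install root
$WAS_HOME/bin/wsadmin.sh -lang jython -user wasadmin -password secret -f $WAS_HOME/bin/deployConsole.py remove
$WAS_HOME/bin/wsadmin.sh -lang jython -user wasadmin -password secret -f $WAS_HOME/bin/deployConsole.py install
# restart the deployment manager afterwards and retry the admin console URL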
Unix commands
http://www.ccl.net/cca/documents/dyoung/topics-orig/unix.html
http://www.washington.edu/computing/unix/unixqr.html
http://www.circle4.com/jaqui/papers/webunuk.html
http://www.comptechdoc.org/os/linux/usersguide/linux_ugfilesp.html
on unix
ls -ltr
the permission bits are:
read = 4
write = 2
execute = 1
if you need to change the permissions:
chmod <user><group><other> filename
chmod 755 filename ---- only for a file
chmod -R 755 directoryname --- recursively for a directory
To change the ownership:
chown user:group filename -- file
chown -Rh user:group directory -- directory (recursive)
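A short worked example of the read=4 / write=2 / execute=1 arithmetic (the file name is just an illustration):
# rwx r-x r-x  ->  owner 4+2+1=7, group 4+0+1=5, others 4+0+1=5  ->  755
chmod 755 startServer.sh
ls -l startServer.sh        # now shows -rwxr-xr-x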
for more information refer this link :
http://www.comptechdoc.org/os/linux/usersguide/linux_ugfilesp.html
Thursday, November 25, 2010
Scenario-3
What is the difference between SSL and global security? How can we configure them? Explain in detail their purpose and configuration through the admin console as well as wsadmin.
SSL is Secure Sockets Layer, which means securing the communication channel between two or more entities.
The reason for SSL is encryption of the data which travels through a medium, thereby increasing security and data integrity.
SSL is a very broad term: SSL between browser and webserver, webserver and appserver, between application modules, between appserver and MQ, and so on.
Global security, on the other hand, is about securing the admin console (who can log in to the admin console and what roles that user has) and how application security behaves, e.g. by enabling Java 2 security.
Scenario-2
If a client is complaining that the app is slow, what are the steps that need to be followed to resolve the issue?
Well, there are many reasons for slow behaviour:
1) Check the CPU utilisation of the appserver and of the entire server to see if there is a CPU bottleneck.
2) Check the memory utilisation of the appserver and of the entire server to see if there is a memory bottleneck.
3) Check the SystemOut.log to see if there are any hung threads or OOM errors.
4) Check the heap allocated to the JVM.
5) Check the connections to the DB; it is possible that the connections to the DB have got maxed out.
6) Check the plugin and webserver logs to see if their connections have maxed out.
7) Take a thread dump for further analysis.
These would be some of the steps to take; a small command sketch for the first few checks follows.
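A minimal sketch of the OS-level and log checks above, assuming a Linux host and a default log location (both the path and the server name are assumptions):
top -b -n1 | head -20                      # steps 1-2: CPU and memory usage on the box
vmstat 5 3                                 # run queue, free memory and swapping at a glance
LOG=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log   # hypothetical path
grep -c WSVR0605W "$LOG"                   # step 3: hung-thread warnings raised by the thread monitor
grep -i outofmemory "$LOG"                 # step 3: any OutOfMemory errors logged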
Scenario-1
There is an application running on a cluster. Both the app and the JVMs are up and running fine, but when we hit the URL for accessing the application it shows "page cannot be displayed". What are the troubleshooting steps to resolve the issue? I checked the SystemOut.log and SystemErr.log but couldn't find anything helpful.
The flow of a request happens from
Load Balancer >> Webserver >> Appserver >> DB
so the investigation for such issues should also follow the same route (well, that's how I follow it); a small curl sketch follows the list.
1) Try to check if the URL is responding from your end (this will establish whether it is specific to one user or an error for everyone).
2) Check whether the webserver is running or not; if it is not running, start it.
3) Check whether the application server and the application are running; if not, start them.
4) Try to access the application directly from the appserver, i.e. using the HTTP transport port (WC_defaulthost). This will identify whether the error is due to the app server or lies with the webserver/plugin.
5) Check for errors in the appserver and webserver logs.
6) Check the config in the plugin file to ensure that the URL being hit is available in plugin-cfg.xml. If it is not, it could be that the plugin was not regenerated and propagated after the deployment.
7) Enable the trace in plugin-cfg.xml to understand from the plugin logs whether the plugin is forwarding the request to the appservers or not.
8) Lastly, also check whether the page you are requesting is available within the web modules.
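A minimal sketch of steps 1 and 4, assuming hypothetical host names and the default WC_defaulthost port 9080 (adjust for your environment):
curl -I http://webserver.example.com/myapp/             # request through the webserver and plugin
curl -I http://appserver1.example.com:9080/myapp/       # request directly against the appserver transport port
# if the direct request works but the webserver request fails, suspect the webserver/plugin side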
heap dump, thread dump and deadlocks in threads
A heapdump is a snapshot of JVM memory - it shows the live objects on the heap along with references between objects. It is used to determine memory usage patterns and memory leak suspects.
When a heapdump is created, a GC is run first so that only the live objects are present in the heapdump.
Generally a heapdump is created when an out of memory exception occurs, though you can generate one manually too.
A heapdump is really important when troubleshooting performance and memory leak issues.
...
Thread dump
A thread dump is a way of finding out what every thread in the JVM is doing at a particular point in time,
so it shows which methods are being run, any bottlenecks, hangs, etc.
....
Memory Leak
If an application eats more and more system memory and never seems to return memory back to the system, no matter how much physical memory is allocated to it, that is a sign of a memory leak.
Eventually there will not be sufficient memory left to hold the live objects.
--------------------------------------------------------------------------------
Deadlock
This is when two threads each hold a resource that the other one wants. Each blocks, waiting for the resource it needs to be released, and so the resources are never released and the application hangs.
To put it in a simple way:
Thread 1 is using resource A and it wants resource B.
Thread 2 is using resource B and it wants resource A.
Thread 1 says it will release resource A only if thread 2 releases resource B,
and thread 2 says it will release resource B only if thread 1 releases resource A,
and neither thread 1 nor thread 2 gives up, which results in a deadlock and the application hangs.
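A small sketch of generating these artifacts on an IBM JVM under WebSphere, assuming the usual default behaviour (kill -3 writes a javacore, an OutOfMemoryError writes a heapdump) and a default profile path; file locations and naming can vary by version:
PID=12345                                   # hypothetical appserver PID
kill -3 $PID                                # thread dump: javacore.<date>.<time>.<pid>.txt
# heapdumps (heapdump.<date>.<time>.<pid>.phd) are produced automatically on OutOfMemoryError
ls -lt /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/ | egrep 'javacore|heapdump' | head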
some error codes
For WebSphere-related errors, go through
http://www.redbooks.ibm.com/redbooks/pdfs/sg247461.pdf
For Apache-related errors, go through
http://tools.ietf.org/html/rfc2616
It is better to know the internals than just the error codes.
--------------------------------------------
ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
ErrorDocument 410 /error/HTTP_GONE.html.var
ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
ErrorDocument 415 /error/HTTP_SERVICE_UNAVAILABLE.html.var
ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var
You can find these error codes in HTTPServer/conf/httpd.conf.
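For example, to list the ErrorDocument mappings your IBM HTTP Server is actually using (the install path is an assumption):
grep -i ErrorDocument /opt/IBM/HTTPServer/conf/httpd.conf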
Wednesday, November 24, 2010
Linux commands-2
Source From: http://www.pixelbeat.org/docs/linux_commands.html
| Command | Description | |
| • | grep . /proc/sys/net/ipv4/* | List the contents of flag files |
| • | set | grep $USER | Search current environment |
| • | tr '\0' '\n' < /proc/$$/environ | Display the environment for any process |
| • | echo $PATH | tr : '\n' | Display the $PATH one per line |
| • | kill -0 $$ && echo process exists and can accept signals | Check for the existence of a process (pid) |
| • | find /etc -readable | xargs less -K -p'*ntp' -j $((${LINES:-25}/2)) | Search paths and data with full context. Use n to iterate |
| Low impact admin | ||
| # | apt-get install "package" -o Acquire::http::Dl-Limit=42 \ -o Acquire::Queue-mode=access | Rate limit apt-get to 42KB/s |
| echo 'wget url' | at 01:00 | Download url at 1AM to current dir | |
| # | apache2ctl configtest && apache2ctl graceful | Restart apache if config is OK |
| • | nice openssl speed sha1 | Run a low priority command (openssl benchmark) |
| • | renice 19 -p $$; ionice -c3 -p $$ | Make shell (script) low priority. Use for non interactive tasks |
| Interactive monitoring | ||
| • | htop -d 5 | Better top (scrollable, tree view, lsof/strace integration, ...) |
| • | iotop | What's doing I/O |
| # | watch -d -n30 "nice ps_mem.py | tail -n $((${LINES:-12}-2))" | What's using RAM |
| # | iftop | What's using the network. See also iptraf |
| # | mtr www.pixelbeat.org | ping and traceroute combined |
| Useful utilities | ||
| • | pv < /dev/zero > /dev/null | Progress Viewer for data copying from files and pipes |
| • | wkhtml2pdf http://.../linux_commands.html linux_commands.pdf | Make a pdf of a web page |
| • | timeout 1 sleep 3 | run a command with bounded time. See also timeout |
| Networking | ||
| • | python -m SimpleHTTPServer | Serve current directory tree at http://$HOSTNAME:8000/ |
| • | openssl s_client -connect www.google.com:443 </dev/null 2>&0 | openssl x509 -dates -noout | Display the date range for a site's certs |
| • | curl -I www.pixelbeat.org | Display the server headers for a web site |
| # | lsof -i tcp:80 | What's using port 80 |
| # | httpd -S | Display a list of apache virtual hosts |
| • | vim scp://user@remote//path/to/file | Edit a remote file directly in vim |
| • | curl -s http://www.pixelbeat.org/pixelbeat.asc | gpg --import | Import a gpg key from the web |
| • | tc qdisc add dev lo root handle 1:0 netem delay 20msec | Add 20ms latency to loopback device (for testing) |
| • | tc qdisc del dev lo root | Remove latency added above |
| Notification | ||
| • | echo "DISPLAY=$DISPLAY xmessage cooker" | at "NOW +30min" | Popup reminder |
| • | notify-send "subject" "message" | Display a gnome popup notification |
| echo "mail -s 'go home' P@draigBrady.com < /dev/null" | at 17:30 | Email reminder | |
| uuencode file name | mail -s subject P@draigBrady.com | Send a file via email | |
| ansi2html.sh | mail -a "Content-Type: text/html" P@draigBrady.com | Send/Generate HTML email | |
| Better default settings (useful in your .bashrc) | ||
| # | tail -s.1 -f /var/log/messages | Display file additions more responsively |
| • | seq 100 | tail -n $((${LINES:-12}-2)) | Display as many lines as possible without scrolling |
| # | tcpdump -s0 | Capture full network packets |
| Useful functions/aliases (useful in your .bashrc) | ||
| • | md () { mkdir -p "$1" && cd "$1"; } | Change to a new directory |
| • | strerror() { python -c "import os; print os.strerror($1)"; } | Display the meaning of an errno |
| • | plot() { { echo 'plot "-"' "$@"; cat; } | gnuplot -persist; } | Plot stdin. (e.g: • seq 1000 | sed 's/.*/s(&)/' | bc -l | plot) |
| • | alias hd='od -Ax -tx1z -v' | Handy hexdump. (usage e.g.: • hd /proc/self/cmdline | less) |
| • | alias realpath='readlink -f' | Canonicalize path. (usage e.g.: • realpath ~/../$USER) |
| Multimedia | ||
| • | DISPLAY=:0.0 import -window root orig.png | Take a (remote) screenshot |
| • | convert -filter catrom -resize '600x>' orig.png 600px_wide.png | Shrink to width, computer generated images or screenshots |
| mplayer -ao pcm -vo null -vc dummy /tmp/Flash* | Extract audio from flash video to audiodump.wav | |
| ffmpeg -i filename.avi | Display info about multimedia file | |
| • | ffmpeg -f x11grab -s xga -r 25 -i :0 -sameq demo.mpg | Capture video of an X display |
| DVD | ||
| for i in $(seq 9); do ffmpeg -i $i.avi -target pal-dvd $i.mpg; done | Convert video to the correct encoding and aspect for DVD | |
| dvdauthor -odvd -t -v "pal,4:3,720xfull" *.mpg;dvdauthor -odvd -T | Build DVD file system. Use 16:9 for widescreen input | |
| growisofs -dvd-compat -Z /dev/dvd -dvd-video dvd | Burn DVD file system to disc | |
| Unicode | ||
| • | python -c "import unicodedata as u; print u.name(unichr(0x2028))" | Lookup a unicode character |
| • | uconv -f utf8 -t utf8 -x nfc | Normalize combining characters |
| • | printf '\300\200' | iconv -futf8 -tutf8 >/dev/null | Validate UTF-8 |
| • | printf 'ŨTF8\n' | LANG=C grep --color=always '[^ -~]\+' | Highlight non printable ASCII chars in UTF-8 |
| Development | ||
| • | gcc -march=native -E -v -</dev/null 2>&1|sed -n 's/.*-mar/-mar/p' | Show autodetected gcc tuning params. See also gcccpuopt |
| • | for i in $(seq 4); do { [ $i = 1 ] && wget http://url.ie/6lko -qO-|| ./a.out; } | tee /dev/tty | gcc -xc - 2>/dev/null; done | Compile and execute C code from stdin |
| • | cpp -dM /dev/null | Show all predefined macros |
| • | echo "#include <features.h>" | cpp -dN | grep "#define __USE_" | Show all glibc feature macros |
| gdb -tui | Debug showing source code context in separate windows | |
| Extended Attributes (Note you may need to (re)mount with "acl" or "user_xattr" options) | ||
| • | getfacl . | Show ACLs for file |
| • | setfacl -m u:nobody:r . | Allow a specific user to read file |
| • | setfacl -x u:nobody . | Delete a specific user's rights to file |
| setfacl --default -m group:users:rw- dir/ | Set umask for a for a specific dir | |
| getcap file | Show capabilities for a program | |
| setcap cap_net_raw+ep your_gtk_prog | Allow gtk program raw access to network | |
| • | getfattr -m- -d . | Show all extended attributes (includes selinux,acls,...) |
| • | setfattr -n "user.foo" -v "bar" . | Set arbitrary user attributes |
| BASH specific | ||
| • | echo 123 | tee >(tr 1 a) | tr 1 b | Split data to 2 commands (using process substitution) |
| meld local_file <(ssh host cat remote_file) | Compare a local and remote file (using process substitution) | |
| Multicore | ||
| • | taskset -c 0 nproc | Restrict a command to certain processors |
| • | find -type f -print0 | xargs -r0 -P$(nproc) -n10 md5sum | Process files in parallel over available processors |
| sort -m <(sort data1) <(sort data2) >data.sorted | Sort separate data files over 2 processors | |
Linux commands-1
Source From: http://www.pixelbeat.org/cmdline.html
| Command | Description | |
| • | apropos whatis | Show commands pertinent to string. See also threadsafe |
| • | man -t man | ps2pdf - > man.pdf | make a pdf of a manual page |
| which command | Show full path name of command | |
| time command | See how long a command takes | |
| • | time cat | Start stopwatch. Ctrl-d to stop. See also sw |
| dir navigation | ||
| • | cd - | Go to previous directory |
| • | cd | Go to $HOME directory |
| (cd dir && command) | Go to dir, execute command and return to current dir | |
| • | pushd . | Put current dir on stack so you can popd back to it |
| file searching | ||
| • | alias l='ls -l --color=auto' | quick dir listing |
| • | ls -lrt | List files by date. See also newest and find_mm_yyyy |
| • | ls /usr/bin | pr -T9 -W$COLUMNS | Print in 9 columns to width of terminal |
| find -name '*.[ch]' | xargs grep -E 'expr' | Search 'expr' in this dir and below. See also findrepo | |
| find -type f -print0 | xargs -r0 grep -F 'example' | Search all regular files for 'example' in this dir and below | |
| find -maxdepth 1 -type f | xargs grep -F 'example' | Search all regular files for 'example' in this dir | |
| find -maxdepth 1 -type d | while read dir; do echo $dir; echo cmd2; done | Process each item with multiple commands (in while loop) | |
| • | find -type f ! -perm -444 | Find files not readable by all (useful for web site) |
| • | find -type d ! -perm -111 | Find dirs not accessible by all (useful for web site) |
| • | locate -r 'file[^/]*\.txt' | Search cached index for names. This re is like glob *file*.txt |
| • | look reference | Quickly search (sorted) dictionary for prefix |
| • | grep --color reference /usr/share/dict/words | Highlight occurances of regular expression in dictionary |
| archives and compression | ||
| gpg -c file | Encrypt file | |
| gpg file.gpg | Decrypt file | |
| tar -c dir/ | bzip2 > dir.tar.bz2 | Make compressed archive of dir/ | |
| bzip2 -dc dir.tar.bz2 | tar -x | Extract archive (use gzip instead of bzip2 for tar.gz files) | |
| tar -c dir/ | gzip | gpg -c | ssh user@remote 'dd of=dir.tar.gz.gpg' | Make encrypted archive of dir/ on remote machine | |
| find dir/ -name '*.txt' | tar -c --files-from=- | bzip2 > dir_txt.tar.bz2 | Make archive of subset of dir/ and below | |
| find dir/ -name '*.txt' | xargs cp -a --target-directory=dir_txt/ --parents | Make copy of subset of dir/ and below | |
| ( tar -c /dir/to/copy ) | ( cd /where/to/ && tar -x -p ) | Copy (with permissions) copy/ dir to /where/to/ dir | |
| ( cd /dir/to/copy && tar -c . ) | ( cd /where/to/ && tar -x -p ) | Copy (with permissions) contents of copy/ dir to /where/to/ | |
| ( tar -c /dir/to/copy ) | ssh -C user@remote 'cd /where/to/ && tar -x -p' | Copy (with permissions) copy/ dir to remote:/where/to/ dir | |
| dd bs=1M if=/dev/sda | gzip | ssh user@remote 'dd of=sda.gz' | Backup harddisk to remote machine | |
| rsync (Network efficient file copier: Use the --dry-run option for testing) | ||
| rsync -P rsync://rsync.server.com/path/to/file file | Only get diffs. Do multiple times for troublesome downloads | |
| rsync --bwlimit=1000 fromfile tofile | Locally copy with rate limit. It's like nice for I/O | |
| rsync -az -e ssh --delete ~/public_html/ remote.com:'~/public_html' | Mirror web site (using compression and encryption) | |
| rsync -auz -e ssh remote:/dir/ . && rsync -auz -e ssh . remote:/dir/ | Synchronize current directory with remote one | |
| ssh (Secure SHell) | ||
| ssh $USER@$HOST command | Run command on $HOST as $USER (default command=shell) | |
| • | ssh -f -Y $USER@$HOSTNAME xeyes | Run GUI command on $HOSTNAME as $USER |
| scp -p -r $USER@$HOST: file dir/ | Copy with permissions to $USER's home directory on $HOST | |
| ssh -g -L 8080:localhost:80 root@$HOST | Forward connections to $HOSTNAME:8080 out to $HOST:80 | |
| ssh -R 1434:imap:143 root@$HOST | Forward connections from $HOST:1434 in to imap:143 | |
| ssh-copy-id $USER@$HOST | Install $USER's public key on $HOST for password-less log in | |
| wget (multi purpose download tool) | ||
| • | (cd dir/ && wget -nd -pHEKk http://www.pixelbeat.org/cmdline.html) | Store local browsable version of a page to the current dir |
| wget -c http://www.example.com/large.file | Continue downloading a partially downloaded file | |
| wget -r -nd -np -l1 -A '*.jpg' http://www.example.com/dir/ | Download a set of files to the current directory | |
| wget ftp://remote/file[1-9].iso/ | FTP supports globbing directly | |
| • | wget -q -O- http://www.pixelbeat.org/timeline.html | grep 'a href' | head | Process output directly |
| echo 'wget url' | at 01:00 | Download url at 1AM to current dir | |
| wget --limit-rate=20k url | Do a low priority download (limit to 20KB/s in this case) | |
| wget -nv --spider --force-html -i bookmarks.html | Check links in a file | |
| wget --mirror http://www.example.com/ | Efficiently update a local copy of a site (handy from cron) | |
| networking (Note ifconfig, route, mii-tool, nslookup commands are obsolete) | ||
| ethtool eth0 | Show status of ethernet interface eth0 | |
| ethtool --change eth0 autoneg off speed 100 duplex full | Manually set ethernet interface speed | |
| iwconfig eth1 | Show status of wireless interface eth1 | |
| iwconfig eth1 rate 1Mb/s fixed | Manually set wireless interface speed | |
| • | iwlist scan | List wireless networks in range |
| • | ip link show | List network interfaces |
| ip link set dev eth0 name wan | Rename interface eth0 to wan | |
| ip link set dev eth0 up | Bring interface eth0 up (or down) | |
| • | ip addr show | List addresses for interfaces |
| ip addr add 1.2.3.4/24 brd + dev eth0 | Add (or del) ip and mask (255.255.255.0) | |
| • | ip route show | List routing table |
| ip route add default via 1.2.3.254 | Set default gateway to 1.2.3.254 | |
| • | host pixelbeat.org | Lookup DNS ip address for name or vice versa |
| • | hostname -i | Lookup local ip address (equivalent to host `hostname`) |
| • | whois pixelbeat.org | Lookup whois info for hostname or ip address |
| • | netstat -tupl | List internet services on a system |
| • | netstat -tup | List active connections to/from system |
| windows networking (Note samba is the package that provides all this windows specific networking support) | ||
| • | smbtree | Find windows machines. See also findsmb |
| nmblookup -A 1.2.3.4 | Find the windows (netbios) name associated with ip address | |
| smbclient -L windows_box | List shares on windows machine or samba server | |
| mount -t smbfs -o fmask=666,guest //windows_box/share /mnt/share | Mount a windows share | |
| echo 'message' | smbclient -M windows_box | Send popup to windows machine (off by default in XP sp2) | |
| text manipulation (Note sed uses stdin and stdout. Newer versions support inplace editing with the -i option) | ||
| sed 's/string1/string2/g' | Replace string1 with string2 | |
| sed 's/\(.*\)1/\12/g' | Modify anystring1 to anystring2 | |
| sed '/ *#/d; /^ *$/d' | Remove comments and blank lines | |
| sed ':a; /\\$/N; s/\\\n//; ta' | Concatenate lines with trailing \ | |
| sed 's/[ \t]*$//' | Remove trailing spaces from lines | |
| sed 's/\([`"$\]\)/\\\1/g' | Escape shell metacharacters active within double quotes | |
| • | seq 10 | sed "s/^/ /; s/ *\(.\{7,\}\)/\1/" | Right align numbers |
| sed -n '1000{p;q}' | Print 1000th line | |
| sed -n '10,20p;20q' | Print lines 10 to 20 | |
| sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q' | Extract title from HTML web page | |
| sed -i 42d ~/.ssh/known_hosts | Delete a particular line | |
| sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | Sort IPV4 ip addresses | |
| • | echo 'Test' | tr '[:lower:]' '[:upper:]' | Case conversion |
| • | tr -dc '[:print:]' < /dev/urandom | Filter non printable characters |
| • | tr -s '[:blank:]' '\t' </proc/diskstats | cut -f4 | cut fields separated by blanks |
| • | history | wc -l | Count lines |
| set operations (Note you can export LANG=C for speed. Also these assume no duplicate lines within a file) | ||
| sort file1 file2 | uniq | Union of unsorted files | |
| sort file1 file2 | uniq -d | Intersection of unsorted files | |
| sort file1 file1 file2 | uniq -u | Difference of unsorted files | |
| sort file1 file2 | uniq -u | Symmetric Difference of unsorted files | |
| join -t'\0' -a1 -a2 file1 file2 | Union of sorted files | |
| join -t'\0' file1 file2 | Intersection of sorted files | |
| join -t'\0' -v2 file1 file2 | Difference of sorted files | |
| join -t'\0' -v1 -v2 file1 file2 | Symmetric Difference of sorted files | |
| math | ||
| • | echo '(1 + sqrt(5))/2' | bc -l | Quick math (Calculate φ). See also bc |
| • | echo 'pad=20; min=64; (100*10^6)/((pad+min)*8)' | bc | More complex (int) e.g. This shows max FastE packet rate |
| • | echo 'pad=20; min=64; print (100E6)/((pad+min)*8)' | python | Python handles scientific notation |
| • | echo 'pad=20; plot [64:1518] (100*10**6)/((pad+x)*8)' | gnuplot -persist | Plot FastE packet rate vs packet size |
| • | echo 'obase=16; ibase=10; 64206' | bc | Base conversion (decimal to hexadecimal) |
| • | echo $((0x2dec)) | Base conversion (hex to dec) ((shell arithmetic expansion)) |
| • | units -t '100m/9.58s' 'miles/hour' | Unit conversion (metric to imperial) |
| • | units -t '500GB' 'GiB' | Unit conversion (SI to IEC prefixes) |
| • | units -t '1 googol' | Definition lookup |
| • | seq 100 | (tr '\n' +; echo 0) | bc | Add a column of numbers. See also add and funcpy |
| calendar | ||
| • | cal -3 | Display a calendar |
| • | cal 9 1752 | Display a calendar for a particular month year |
| • | date -d fri | What date is it this friday. See also day |
| • | [ $(date -d "tomorrow" +%d) = "01" ] || exit | exit a script unless it's the last day of the month |
| • | date --date='25 Dec' +%A | What day does xmas fall on, this year |
| • | date --date='@2147483647' | Convert seconds since the epoch (1970-01-01 UTC) to date |
| • | TZ='America/Los_Angeles' date | What time is it on west coast of US (use tzselect to find TZ) |
| • | date --date='TZ="America/Los_Angeles" 09:00 next Fri' | What's the local time for 9AM next Friday on west coast US |
| locales | ||
| • | printf "%'d\n" 1234 | Print number with thousands grouping appropriate to locale |
| • | BLOCK_SIZE=\'1 ls -l | Use locale thousands grouping in ls. See also l |
| • | echo "I live in `locale territory`" | Extract info from locale database |
| • | LANG=en_IE.utf8 locale int_prefix | Lookup locale info for specific country. See also ccodes |
| • | locale | cut -d= -f1 | xargs locale -kc | less | List fields available in locale database |
| recode (Obsoletes iconv, dos2unix, unix2dos) | ||
| • | recode -l | less | Show available conversions (aliases on each line) |
| recode windows-1252.. file_to_change.txt | Windows "ansi" to local charset (auto does CRLF conversion) | |
| recode utf-8/CRLF.. file_to_change.txt | Windows utf8 to local charset | |
| recode iso-8859-15..utf8 file_to_change.txt | Latin9 (western europe) to utf8 | |
| recode ../b64 < file.txt > file.b64 | Base64 encode | |
| recode /qp.. < file.qp > file.txt | Quoted printable decode | |
| recode ..HTML < file.txt > file.html | Text to HTML | |
| • | recode -lf windows-1252 | grep euro | Lookup table of characters |
| • | echo -n 0x80 | recode latin-9/x1..dump | Show what a code represents in latin-9 charmap |
| • | echo -n 0x20AC | recode ucs-2/x2..latin-9/x | Show latin-9 encoding |
| • | echo -n 0x20AC | recode ucs-2/x2..utf-8/x | Show utf-8 encoding |
| CDs | ||
| gzip < /dev/cdrom > cdrom.iso.gz | Save copy of data cdrom | |
| mkisofs -V LABEL -r dir | gzip > cdrom.iso.gz | Create cdrom image from contents of dir | |
| mount -o loop cdrom.iso /mnt/dir | Mount the cdrom image at /mnt/dir (read only) | |
| cdrecord -v dev=/dev/cdrom blank=fast | Clear a CDRW | |
| gzip -dc cdrom.iso.gz | cdrecord -v dev=/dev/cdrom - | Burn cdrom image (use dev=ATAPI -scanbus to confirm dev) | |
| cdparanoia -B | Rip audio tracks from CD to wav files in current dir | |
| cdrecord -v dev=/dev/cdrom -audio -pad *.wav | Make audio CD from all wavs in current dir (see also cdrdao) | |
| oggenc --tracknum='track' track.cdda.wav -o 'track.ogg' | Make ogg file from wav file | |
| disk space (See also FSlint) | ||
| • | ls -lSr | Show files by size, biggest last |
| • | du -s * | sort -k1,1rn | head | Show top disk users in current dir. See also dutop |
| • | du -hs /home/* | sort -k1,1h | Sort paths by easy to interpret disk usage |
| • | df -h | Show free space on mounted filesystems |
| • | df -i | Show free inodes on mounted filesystems |
| • | fdisk -l | Show disks partitions sizes and types (run as root) |
| • | rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n | List all packages by installed size (Bytes) on rpm distros |
| • | dpkg-query -W -f='${Installed-Size;10}\t${Package}\n' | sort -k1,1n | List all packages by installed size (KBytes) on deb distros |
| • | dd bs=1 seek=2TB if=/dev/null of=ext3.test | Create a large test file (taking no space). See also truncate |
| • | > file | truncate data of file or create an empty file |
| monitoring/debugging | ||
| • | tail -f /var/log/messages | Monitor messages in a log file |
| • | strace -c ls >/dev/null | Summarise/profile system calls made by command |
| • | strace -f -e open ls >/dev/null | List system calls made by command |
| • | ltrace -f -e getenv ls >/dev/null | List library calls made by command |
| • | lsof -p $$ | List paths that process id has open |
| • | lsof ~ | List processes that have specified path open |
| • | tcpdump not port 22 | Show network traffic except ssh. See also tcpdump_not_me |
| • | ps -e -o pid,args --forest | List processes in a hierarchy |
| • | ps -e -o pcpu,cpu,nice,state,cputime,args --sort pcpu | sed '/^ 0.0 /d' | List processes by % cpu usage |
| • | ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS | List processes by mem (KB) usage. See also ps_mem.py |
| • | ps -C firefox-bin -L -o pid,tid,pcpu,state | List all threads for a particular process |
| • | ps -p 1,2 | List info for particular process IDs |
| • | last reboot | Show system reboot history |
| • | free -m | Show amount of (remaining) RAM (-m displays in MB) |
| • | watch -n.1 'cat /proc/interrupts' | Watch changeable data continuously |
| • | udevadm monitor | Monitor udev events to help configure rules |
| system information (see also sysinfo) ('#' means root access is required) | ||
| • | uname -a | Show kernel version and system architecture |
| • | head -n1 /etc/issue | Show name and version of distribution |
| • | cat /proc/partitions | Show all partitions registered on the system |
| • | grep MemTotal /proc/meminfo | Show RAM total seen by the system |
| • | grep "model name" /proc/cpuinfo | Show CPU(s) info |
| • | lspci -tv | Show PCI info |
| • | lsusb -tv | Show USB info |
| • | mount | column -t | List mounted filesystems on the system (and align output) |
| • | grep -F capacity: /proc/acpi/battery/BAT0/info | Show state of cells in laptop battery |
| # | dmidecode -q | less | Display SMBIOS/DMI information |
| # | smartctl -A /dev/sda | grep Power_On_Hours | How long has this disk (system) been powered on in total |
| # | hdparm -i /dev/sda | Show info about disk sda |
| # | hdparm -tT /dev/sda | Do a read speed test on disk sda |
| # | badblocks -s /dev/sda | Test for unreadable blocks on disk sda |
| interactive (see also linux keyboard shortcuts) | ||
| • | readline | Line editor used by bash, python, bc, gnuplot, ... |
| • | screen | Virtual terminals with detach capability, ... |
| • | mc | Powerful file manager that can browse rpm, tar, ftp, ssh, ... |
| • | gnuplot | Interactive/scriptable graphing |
| • | links | Web browser |
| • | xdg-open . | open a file or url with the registered desktop application |
questions asked at IBM
If an application becomes slow, how do you increase the performance of the application?
What is the major issue that you faced recently?
How many users are currently using the webserver, and where can we find this information?
Have you come across an IBM PMR? For what reason?
What is the difference between a personal certificate and a signer certificate?
How can you configure session affinity?
How can you know the number of JDBC connections in use while the application is running?
If an application receives a bad request, how can you resolve that issue?
By default, what is the maximum number of requests the application server can serve at a given point of time?
If an application is not responding properly in a clustered environment, how do you find the particular problem?
How do you identify which app server a request is hitting in a clustered environment, through the command line?
How many levels of session timeout are there?
Tuesday, November 23, 2010
WAS Info
1) How do we find out whether the installation has completed successfully or not?
How many ways can we find it?
We can check in log.txt, where you should find the "INSTCONFSUCCESS" message, and you can also verify with the Installation Verification Tool (IVT).
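A minimal sketch of both checks, assuming a default install root and profile; the paths, profile name and ivt.sh arguments are assumptions, so verify them against your version:
grep INSTCONFSUCCESS /opt/IBM/WebSphere/AppServer/logs/install/log.txt       # installer completion marker
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/ivt.sh server1 AppSrv01   # Installation Verification Tool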
================================================================
2) In what scenario do we go for an appserver profile, in what scenario do we go for a custom profile, and what is the difference between an appserver profile and a custom profile?
We would install an appserver profile if we just need a standalone appserver. It has its own JVM and apps can be installed in it.
In any case, we could federate the appserver profile into the cell whenever we want in future.
A custom profile, on the other hand, is an empty profile (no preinstalled appservers, config, etc.) which is federated to the dmgr cell, and we can then create the appserver with our own customized settings.
Differences
Appserver profile --- It is a standalone appserver, i.e. server1 is installed by default.
Custom profile --- It is an empty profile.
Appserver profile --- It has its own admin console to manage server1 and other config.
Custom profile --- As it is an empty profile with no appservers, it does not have an admin console.
====================================================================
3) Suppose I forgot my dmgr console username and password. Where can I get those credential details?
The username/password within WAS are encrypted, so I don't think you can recover the old password. There is one workaround though:
Within <DMGRPROFILE_HOME>/config/cells/<cellname>/security.xml there is an enabled="true" attribute; change it to "false" and restart the dmgr. It will then allow anonymous login, and you can set a new password for the dmgr console.
(or)
In a real environment, the user name and password of the dmgr are stored in the soap.client.props file, e.g. /opt/IBM/WebSphere/AppServer/profiles/Dmgr/properties/soap.client.props.
The user name is not encrypted; only the password is encoded ({xor}...). You can decode the encoded password using the following URL:
http://www.sysman.nl/wasdecoder/
Otherwise, you can search security.xml, where you will find the admin user ID (primaryAdminId or serverId, depending on the registry) and the encoded password, and decode it with the same URL.
=========================================
4) How do we configure a custom registry for a standalone application server? Can anyone explain in detail?
It is quite easy to configure a custom registry for a standalone application server:
1) Create registry files for users and groups.
2) Save them in a new folder.
3) Open the console and select Global security; under registries select Custom registry.
4) Give the server user id and password (administrator user name and password).
5) Save, then select Custom properties and click New.
6) Configure the users file property and give the path of the users file.
7) Similarly configure the groups file.
8) Select LTPA as the authentication mechanism and give a password and confirm password.
9) Then enable global security and disable Java 2 security.
10) Select LTPA under authentication mechanisms and Custom registry under registries, then save and restart the server.
11) Then log in with any username and password that you created in the registry.
From there the concept is the same as creating a user, assigning roles to the user, and accessing the console.
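To confirm afterwards what registry the cell is actually using, a hedged wsadmin Jython sketch (it assumes the AdminConfig type name CustomUserRegistry and exactly one custom registry entry in security.xml):
reg = AdminConfig.list('CustomUserRegistry')
if len(reg) > 0:
    print AdminConfig.show(reg)    # shows serverId, customRegistryClassName, custom properties, ...
else:
    print 'no custom user registry is configured'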
===========================================
5) Can anyone tell me about wsadmin and the objects in wsadmin scripting?
Interacting with the application server is done using administrative clients:
1. Browser based
2. wsadmin
3. Script based
1. A browser-based console is opened in any web browser to administer the appserver.
2. With wsadmin, we invoke the wsadmin objects to administer the appserver.
There are 5 wsadmin objects (a small Jython example follows below):
* AdminTask
* AdminControl
* AdminConfig
* AdminApp
* Help
All these commands are executed from <INSTALL_ROOT>/bin by launching wsadmin.
3. Script based means we run JACL/Jython scripts to administer the appserver.
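As a quick illustration (a minimal Jython sketch, not from the original post; the object names are the standard wsadmin ones), the five objects can be exercised like this after starting ./wsadmin.sh -lang jython from <INSTALL_ROOT>/bin:
print Help.help()                               # Help: describes the other scripting objects
print AdminConfig.list('Server')                # AdminConfig: browse/edit the configuration repository
print AdminControl.queryNames('type=Server,*')  # AdminControl: query/operate on running MBeans
print AdminApp.list()                           # AdminApp: list/install/update applications
print AdminTask.help('-commands')               # AdminTask: high-level administrative commands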
=================================
6) Using the WAS command-line client wsadmin (run with root privileges):
1. Open a connection to the local WAS in offline mode
wsadmin -conntype NONE
2. Turn off global security
wsadmin> securityoff
3. Save
wsadmin> $AdminConfig save
You can now log in to the console with no password.
======================================
7) Policy files
There are 3 different Java 2 security policy files to know about:
1) app.policy: contains the default permissions granted to all applications on a node, i.e. the default entries by which applications can access default system resources.
2) was.policy: packaged with an application (in its META-INF directory) and contains the application-specific permissions a developer needs to access system resources, such as connecting to a printer.
3) filter.policy: the first two files are used to grant applications access to system resources; filter.policy is used by the WAS admin to block (filter out) permissions so that applications cannot be granted them.
===============================================
8) What is the class loader policy? How many types are there, and what is its main use?
Class loaders are used to load class files into the JVM.
Parent First and Parent Last are class loader delegation models (policies); they are not class loaders themselves (a wsadmin sketch for switching an application's mode follows below).
The class loaders in the JVM are:
1) Bootstrap class loader
2) Extension class loader
3) Application class loader
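For reference, the delegation mode of a deployed application can be viewed or switched with wsadmin Jython roughly as below (a hedged sketch; 'MyApp' is a placeholder application name):
dep = AdminConfig.getid('/Deployment:MyApp/')
depObj = AdminConfig.showAttribute(dep, 'deployedObject')
cl = AdminConfig.showAttribute(depObj, 'classloader')
print AdminConfig.showAttribute(cl, 'mode')        # PARENT_FIRST or PARENT_LAST
AdminConfig.modify(cl, [['mode', 'PARENT_LAST']])  # switch the delegation model
AdminConfig.save()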
=====================================================
9) Difference between a connection pool and an XA data source
Connection pool:
The reason for using a connection pool is that opening and maintaining a database connection for each user, especially for requests made to a dynamic database-driven web application, is costly and wastes resources.
To avoid this, the connection pool keeps a cache of database connections so that connections can be reused when future requests to the database are made.
XA: basically used when you need a two-phase commit. An XA transaction, in the most general terms, is a "global transaction" that may span multiple resources.
So this would mean, for example, that if there are two databases which need to be committed at the same time, you would use an XA driver.
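To see what pools are actually configured, something like the following wsadmin Jython sketch can be used (hedged; the attribute names are the standard DataSource/ConnectionPool ones, and live pool usage is better watched in Tivoli Performance Viewer):
for ds in AdminConfig.list('DataSource').splitlines():
    pool = AdminConfig.showAttribute(ds, 'connectionPool')
    if pool:
        # print the JNDI name and the configured maximum pool size
        print AdminConfig.showAttribute(ds, 'jndiName'), AdminConfig.showAttribute(pool, 'maxConnections')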
Deployment in PROD Env
In a production environment, deployments are usually script or Autosys based (a minimal wsadmin sketch follows at the end of this procedure).
This is to eliminate errors during deployment and make the deployment faster.
If the deployment is done graphically, the procedure is the same as in other environments,
i.e. Install Application >> browse the EAR >> next ... next ... finish :-) (obviously you need to select the appropriate mappings, virtual host, etc.).
As a procedure, it is normally done this way:
1) Take a backup of the old EAR in both PROD and DR
2) Confirm with the AD team whether the EAR staged is the latest one
3) Network team flips the DNS to point to DR
4) Do the deployment in DR
5) Check the logs, access the DR apps, and test the application using the DR URL
6) If the app is working fine and it is confirmed by the AD team, then
7) Network team flips back to PROD
8) Do the deployment in PROD
9) Check the logs, access the PROD apps, and test the application using the PROD URL
where AD = Application Development
DR = Disaster Recovery
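A minimal wsadmin Jython sketch of such a scripted deployment (the EAR path, application name and cluster name are placeholders; real scripts add error handling, backups and node synchronization):
AdminApp.install('/tmp/staging/MyApp.ear',
    ['-appname', 'MyApp', '-cluster', 'MyCluster', '-usedefaultbindings'])
AdminConfig.save()
# for redeploying an existing application:
# AdminApp.update('MyApp', 'app', ['-operation', 'update', '-contents', '/tmp/staging/MyApp.ear'])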
Session Objects, Session affinity, Session persistence
Session objects: are used to store information needed for a particular user session. Variables stored in the Session object are not discarded when the user jumps between pages in the application; instead, these variables persist for the entire user session.
The web server automatically creates a Session object when a web page from the application is requested by a user who does not already have a session. The server destroys the Session object when the session expires or is abandoned.
One common use for the Session object is to store user preferences or a shopping cart.
Session affinity:
In a clustered environment, any HTTP requests associated with an HTTP session must be routed to the same web application in the same JVM. This ensures that all of the HTTP requests are processed with a consistent view of the user's HTTP session.
It ensures all requests for the same session are processed by the same cluster member unless the server fails or is under too high a load.
Session affinity is maintained by
1) Cookies
2) SSL tracking
3) URL rewriting
For example, if cookies are enabled, the session info is saved in the JSESSIONID cookie.
The JSESSIONID cookie can be divided into four parts:
cache ID
session ID
separator (:)
clone ID (or partition ID, new in V6)
The JSESSIONID will include a partition ID instead of a clone ID when memory-to-memory replication in peer-to-peer mode is selected. Typically, the partition ID is a long numeric number.
For example, the JSESSIONID cookie value 0000HHAnbYWnNxGD-iVupvcArfr:14dtuueci is made up of these four parts:
Cache ID: 0000
Session ID: HHAnbYWnNxGD-iVupvcArfr
Separator: :
Clone ID: 14dtuueci
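A tiny sketch (plain Jython/Python, using the example value above) of pulling the parts apart; the clone ID is what you match against a CloneId in plugin-cfg.xml to tell which cluster member served the request:
jsessionid = '0000HHAnbYWnNxGD-iVupvcArfr:14dtuueci'
cache_id = jsessionid[:4]                            # always the 4-character cache ID
session_id, clone_id = jsessionid[4:].split(':', 1)  # the ':' is the separator
print cache_id, session_id, clone_id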
--------------------
Session persistence: sessions are stored in memory by default, and if a server crashes in a distributed environment, the session information held by that server is lost.
To ensure that session information is not lost in a clustered environment, we use session persistence methods, viz. database persistence and memory-to-memory replication.
Accessing the console
WAS console 7.0 URL
Hi friends, can anybody please tell me what the URL for the WAS console is in 7.0?
I am using this URL but it is not working:
http://localshost:9061/ibm/console
I even tried http://localhost:9060/ibm/console
sol:
Check if the dmgr is running; go to <WAS_HOME>/profiles/<dmgr profile>/bin and run
./serverStatus.sh -all
or ps -auxwww | grep dmgr
If it is stopped, start the dmgr using ./startManager.sh
To get the admin console port, look in <WAS_HOME>/profiles/<dmgr profile>/config/cells/<cellname>/nodes/<nodename>/serverindex.xml; there you will find the admin port (WC_adminhost).
Identify the port to be used, then go to
http://ipaddress:adminport/ibm/console
Use the IP address instead of localhost in the URL.
-----------------------
A JVM is an environment where your Java code runs.
The JVMs within WAS are the processes for the dmgr, the nodeagents, and the appservers.
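If you want to see every running JVM in the cell (dmgr, nodeagents, appservers), a one-line wsadmin Jython check against the dmgr is (a sketch, assuming you can connect):
print AdminControl.queryNames('type=Server,*')   # one ObjectName per running JVM; the processType key shows dmgr/nodeagent/appserver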
Differences between WAS 5.1, 6.1 and 7.0
Profiles
WebSphere 5.1: no concept of profiles; there are 4 types of installation - Express, Base, Network Deployment and Enterprise.
WebSphere 6.1: Cell profile, Deployment Manager profile, Application Server profile, Custom profile.
WebSphere 7.0: Cell (Deployment Manager and managed node), Management, Application Server, Custom profile, Secure Proxy.
Note: under Management there are three types of profiles available: Administrative Agent, Deployment Manager, Job Manager.
Note: the main use of the Job Manager is to queue jobs to application servers in a flexible management environment.
Managing Profiles
WebSphere 5.1: multiple installation instances can be created using the wsinstance script.
WebSphere 6.1: there are two ways of managing a profile:
1. Profile Management Tool (GUI)
2. manageprofiles (command-line interface for managing profiles)
WebSphere 7.0: same as 6.1
Security Roles
WAS 5.1: Administrator, Operator, Configurator
WAS 6.1: Administrator, Operator, Configurator, Deployer, Admin Security Manager, ISC Admin
WAS 7.0: Administrator, Operator, Configurator, Deployer, Admin Security Manager, ISC Admin, Auditor
Web Servers supported
WAS 5.1: Apache HTTP Server, Domino server, IHS, Microsoft IIS, Sun Java System Web Server, HTTP Server for iSeries
WAS 6.1: Apache HTTP Server, Domino server, IHS, Microsoft IIS, Sun Java System Web Server
WAS 7.0: HTTP Server for z/OS and all web servers supported in 6.1
User Registries/Repositories
WAS 5.1: Local operating system, Standalone LDAP registry, Standalone custom registry
WAS 6.1: Federated repositories, Local operating system, Standalone LDAP registry, Standalone custom registry or file-based registry
WAS 7.0: same as 6.1
Logging and Tracing
WAS 5.1: Diagnostic trace, JVM logs, Process logs, IBM service logs
WAS 6.1: apart from the logs available in 5.1, there is a "Change log detail levels" option which sets the message level and trace level of the JVM
WAS 7.0: same as 6.1
Managing Web Servers
WAS 5.1: web servers cannot be managed through the WebSphere admin console.
WAS 6.1: web servers can be administered through the WebSphere admin console (stopping, starting, generation and propagation of the plug-in). Web servers can be created on a managed node or on an unmanaged node.
WAS 7.0: same as 6.1
JMS
WAS 5.1: JMS failover support and scalability are not available.
WAS 6.1: JMS failover support and scalability are available; the SIB (Service Integration Bus) concept is introduced.
WAS 7.0: same as 6.1
Monitoring
WAS 5.1: N/A
WAS 6.1: TPV (Tivoli Performance Viewer) is embedded in the WebSphere admin console for monitoring WebSphere objects.
WAS 7.0: same as 6.1
SIP and Portlet Container
WAS 5.1: N/A
WAS 6.1: SIP (Session Initiation Protocol) extends the application server to run SIP applications written to the JSR 116 specification. Portlet applications compliant with JSR 168 can be deployed.
WAS 7.0: same as 6.1
wsadmin scripts
WAS 5.1: JACL is the scripting language used.
WAS 6.1: JACL is deprecated from 6.1 onwards and Jython scripting is preferred.
WAS 7.0: same as 6.1
--------------------------
Hi,
Small correction...
User Registries/Repositories
WAS 6.1: 1) Federated repositories
2) Local operating system
3) Standalone LDAP registry
4) Standalone custom registry
The file registry is the default user registry in WebSphere Application Server 6.1; it comes under the federated repositories plugin.
WorkLoad Management
Hi
Workload management (WLM) is a WebSphere feature that provides load balancing and affinity between application servers in a WebSphere clustered environment.
WLM is really important for performance. WebSphere uses workload management to send requests to alternate members of the cluster.
WLM is configurable, i.e. we can configure it to ensure that each machine or server in the cluster gets its fair share of the overall client load being processed by the system as a whole.
Some points on WLM:
1) Routing of requests occurs between the web server plug-in and the clustered application servers using HTTP or HTTPS.
2) This routing is based on weights that are associated with the cluster members. If all cluster members have identical weights, the plug-in sends an equal number of requests to all members of the cluster, assuming no strong affinity configurations.
3) If the weights are scaled in the range from zero to 20, the plug-in routes requests to the cluster members with the higher weight values more often.
4) No requests are sent to cluster members with a weight of zero (a wsadmin sketch for viewing/changing weights follows).
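The weights themselves can be read or changed with wsadmin Jython roughly like this (a hedged sketch; the cluster and member names are placeholders, and the plug-in must be regenerated afterwards):
member = AdminConfig.getid('/ServerCluster:MyCluster/ClusterMember:server1/')
print AdminConfig.showAttribute(member, 'weight')
AdminConfig.modify(member, [['weight', '5']])     # a weight of 0 means no requests are routed to this member
AdminConfig.save()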
How the web server and plugin work
An explanation of how a web server and the plugin work:
The httpd.conf file contains the location of the plugin-cfg.xml file and the plugin module, so when a request comes to the web server it is handed to the plugin, which then routes it to the appropriate app server.
So one httpd.conf file can reference only ONE plugin-cfg.xml file.
So to answer your question, I think it will not work, as the web server will not be in a position to know which plugin-cfg.xml file it should use to forward the request.
But you can configure 4 web servers with 4 plugins.
---------------------------------
Q2. Similarly, I have 4 web servers and one plugin. Which web server is ready to handle the request out of the four?
This could be possible if the 4 web servers are on the same machine.
You can modify the path of the plugin-cfg.xml file in each httpd.conf to point to the same plugin file.
But you would need to do this first:
1) Ensure that each of the web servers runs on a different port
2) Create a virtual host with those ports in the admin console (see the sketch after this answer)
3) Generate and propagate the plugin
4) Modify the path of the plugin-cfg.xml in each httpd.conf to the same location
5) Restart each IHS
Test it using the individual URLs, for example:
http://<ipaddress>:port1/contextroot
http://<ipaddress>:port2/contextroot
http://<ipaddress>:port3/contextroot
If this is for test practice it is fine; if it is for a real implementation, you also need to ensure that the load balancer is configured to route to these ports:
request ----- Load Balancer ------------- ipaddress:port1
                            ------------- ipaddress:port2
Do correct me if I have understood it wrongly.
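For step 2 above (adding the extra web server ports to a virtual host), a hedged wsadmin Jython sketch (the virtual host name and port are placeholders):
vh = AdminConfig.getid('/VirtualHost:default_host/')
AdminConfig.create('HostAlias', vh, [['hostname', '*'], ['port', '8081']])   # repeat for each port
AdminConfig.save()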
Application
I have 2 cluster members and 3 applications running on that cluster. I want to stop only one application on one cluster member, but the other 2 apps should stay available on both cluster members.
Without stopping the cluster member, is there any way to stop the application on just that cluster member?
How to do this?
sol:
As you want to stop the application on only one cluster member: it is not possible through the console, but it is possible through a wsadmin script:
wsadmin>appstop = AdminControl.queryNames('type=ApplicationManager,node=yourNodeName,process=serverName,*')
wsadmin>AdminControl.invoke(appstop, 'stopApplication', 'applicationName')
Then go to Applications > Enterprise Applications > application_name;
if you place the mouse over the status icon it shows "partially started", which means the application is stopped on one of the servers.
Try it and let me know if you have any doubts, as I tried it on my system.
Another important point: this is a Jython script. In WebSphere, JACL is the default; to enable Jython, go to the properties directory of every WAS home and every profile directory, open the wsadmin.properties file, and configure com.ibm.ws.scripting.defaultLang=jython
Synchronization
Any deployment or configuration change is done from the DMGR; these changes are written to the XML files in the master cell repository. Since the nodes need this information, there is a synchronization process wherein the configurations are pushed to each node.
So the DMGR maintains the master cell repository, and each node has a local repository managed by its node agent.
The process uses an epoch and a digest.
If a change to a configuration file is made through the administration programs (administrative console, wsadmin, or other), then the overall repository epoch and the epoch for the folder in which that file resides are modified.
During configuration synchronization operations, if the repository epoch has changed since the previous synchronization operation, then individual folder epochs are compared.
If the epochs for corresponding node and cell directories do not match, then the digests for all files in the directory are recalculated, including the changed file.
Only the modified files are pushed to the nodes, not the entire repository.
This is the overview of synchronization.
------------------------------
In the Network Deployment administrative console, you can click on "System administration" and then "Nodes" to see a list of the nodes in the cell.
You will notice "Synchronize" and "Full Resynchronize" buttons on the page. The "Synchronize" button is normal synchronization (no re-reading of the files).
The "Full Resynchronize" button is the "reset and recalculate" function.
Select the node or nodes to be updated with manual changes and click on the "Full Resynchronize" button.
----------------------
If you make any change using administrative clients (wsadmin, console, or command line), the configuration change goes into the master configuration (DMGR). The master configuration contains the information for all the federated nodes; later the changes are pushed to the respective nodes. This is synchronization.
It is of 2 types: automatic and manual. Changes made using admin clients are synchronized automatically. Manual changes, for example edits made with a text editor, are not picked up automatically; these we need to sync manually, using syncNode.sh from the node's bin directory, for example:
syncNode localhost 8879 (the dmgr SOAP port). You can also do it from the dmgr console by clicking Nodes under System administration.
By default automatic sync is set to 1 minute, so sync takes place every minute. You can see this under nodeagent --> File synchronization service.
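You can also trigger a sync from wsadmin instead of the console; a minimal Jython sketch (the node name is a placeholder, and it assumes you are connected to the dmgr):
sync = AdminControl.completeObjectName('type=NodeSync,node=Node01,*')
print AdminControl.invoke(sync, 'sync')   # returns 'true' if the node synchronized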
If a client is complaining that the app is slow, what are the steps that need to be followed to resolve the issue?
Hi
Well, there are many reasons for slow behaviour:
1) Check the CPU utilisation of the appserver and of the whole machine to see if there is a CPU bottleneck
2) Check the memory utilisation of the appserver and of the whole machine to see if there is a memory bottleneck
3) Check SystemOut.log to see if there are any hung threads or OutOfMemory errors
4) Check the heap allocated to the JVM
5) Check the connections to the DB; it is possible that the connection pool to the DB has maxed out
6) Check the plugin and web server logs to see if their connections have maxed out
7) Take thread dumps for further analysis (see the sketch below)
These would be some of the steps to take.
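For step 7, thread dumps can be taken with kill -3 <PID> or, on an IBM JVM, via the JVM MBean from wsadmin; a hedged Jython sketch (node and server names are placeholders):
jvm = AdminControl.completeObjectName('type=JVM,process=server1,node=Node01,*')
AdminControl.invoke(jvm, 'dumpThreads')   # writes a javacore (location depends on the JVM settings)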
Helpful info WAS-1
1. There is an application running on a cluster. Both the app and the JVMs are up and running fine, but when we hit the URL for accessing the application it shows "page cannot be displayed". What are the troubleshooting steps to resolve the issue? I checked SystemOut.log and SystemErr.log but couldn't find anything helpful.
The flow of a request happens from
Load Balancer >> Web server >> Appserver >> DB
so the investigation for such issues should also follow the same route (well, that's how I follow it):
1) Try to check if the URL is responding from your end (this establishes whether it is specific to one user or an error for everyone)
2) Check whether the web server is running; if it is not running, start it
3) Check whether the application server and the application are running; if not, start them (see the sketch after this list)
4) Try to access the application directly from the appserver, i.e. using the HTTP transport port (WC_defaulthost); this identifies whether the error is in the app server or in the web server/plugin
5) Check for errors in the appserver and web server logs
6) Check the config in the plugin file to ensure that the URL being hit is available in plugin-cfg.xml (if it is not, it could be that the plugin was not regenerated and propagated after the deployment)
7) Enable the trace in plugin-cfg.xml to see in the plugin logs whether the plugin is forwarding the request to the appservers or not
8) Lastly, also check whether the page you are requesting is available within the web modules
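For step 3, a quick wsadmin Jython check of whether the application is actually started anywhere (a sketch; 'MyApp' is a placeholder application name):
app = AdminControl.completeObjectName('type=Application,name=MyApp,*')
if len(app) > 0:
    print 'MyApp is running on: ' + app
else:
    print 'MyApp is not running on any server'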
Thursday, November 4, 2010
Migrating a Version 5.x managed node to a Version 6.0.x managed node
Use the migration tools to migrate WebSphere Application Server Version 5.x managed nodes to Version 6.0.x managed nodes.
Before you begin
Migrating a Version 5.x managed node to a Version 6.0.x managed node requires that you first migrate the Version 5.x deployment manager node to a Version 6.0.x deployment manager node. Migrating Network Deployment Version 5.x to Version 6.0.x is described in Migrating from Network Deployment Version 5.x to a Version 6.0.x deployment manager. Before starting the migration of a managed node from Version 5.x to Version 6.0.x, you must create a Version 6.0.x profile for either a standalone application server or a managed node. If you create a Version 6.0.x managed node profile, do not federate the node before migration. The migration tools federate the Version 6.0.x node during migration.
The migration procedure is the same for either type of Version 6.0.x profile, but the end result can vary slightly. Each standalone application server has a server1 application server process. A Version 5.x managed node might not have a server1 process. This article describes migrating to a Version 6.0.x managed node that has not been federated.
Over time, migrate each Version 5.x managed node in the Version 6.0.x cell to a Version 6.0.x managed node. After migrating all Version 5.x managed nodes, use the convertScriptCompatibility script to change the deployment manager from a node that supports backward compatibility of Version 5.x administration scripts to a node that supports only Version 6.0.x.
Procedure
- Perform a typical or custom Version 6.0.x installation.
- Migrate the Version 5.x deployment manager node to Version 6.0.x as described in Migrating from Network Deployment Version 5.x to a Version 6.0.x deployment manager.
- Collect the following information about your Version 5.x installation before you begin this procedure. The Migration wizard prompts you for it during the migration:
- Installation root directory. See WASPreUpgrade command for a description of the currentWebSphereDirectory parameter.
- Replace port values for virtual hosts and Web containers? Specifying "true" causes any ports of matching VirtualHosts to first be removed from the new configuration before the new values are added; specifying "false" just adds new port values. See WASPostUpgrade command for a description of the replacePorts parameter.
- Backup directory name. See WASPreUpgrade command for a description of the backupDirectory parameter.
- Target profile name. See WASPostUpgrade command for a description of the profileName parameter.
- Use the Version 6.0.x Profile creation wizard to create a managed node, but do not federate the node. The Version 6.0.x node must have the same node name as the Version 5.x node.Note: You can migrate a Version 5.x node without stopping it, but it is not necessary for it to be running for you to migrate its configuration. The migration tools can retrieve all the configuration data while the node is either running or stopped. You must stop the Version 5.x node before you can start the Version 6.0.x node that you are installing, however, so it makes sense to stop it now.
- Use the Migration wizard to migrate the Version 5.x managed node to the Version 6.0.x managed node profile as described in Migrating a Version 5.x application server to a Version 6.0.x standalone application server with the Migration wizard. The Migration wizard copies the configuration and applications from the Version 5.x managed node to the Version 6.0.x managed node. After migrating all of the data, the Migration wizard federates the Version 6.0.x managed node into the deployment manager cell.Note: When migrating managed nodes from Versions 5.0 through 5.1.0 to Version 6.0.x, there is a custom property of which you should be aware: com.ibm.websphere.ObjectIDVersionCompatibility. It might be possible to gain performance benefits after the entire cell is migrated to Version 6.0.x.
- Migrate as many Version 5.x managed nodes as you intend to migrate by using the following procedure.
- Determine the node name of the Version 5.x managed node.
- Use the Profile creation wizard to create a Version 6.0.x managed node, but do not federate the node.
- Use the Migration wizard to migrate the Version 5.x managed node to the Version 6.0.x managed node.
Note: For migration to be successful, you must use the same node names and cell names for each node from Version 5.x to Version 6.0.x.
- If you chose the compatibility option (which is the default), and if all of your nodes are completely migrated to Version 6.0.x, run the convertScriptCompatibility script to remove backward compatibility from the Version 6.0.x deployment manager.
- Issue the convertScriptCompatibility command from the bin directory:
./app_server_root/bin/convertScriptCompatibility.sh (Linux/UNIX)
app_server_root\bin\convertScriptCompatibility.bat (Windows)
What to do next
Occasionally (after rebooting an application server machine, for example), you must restart the nodeagent server on the application server node by running the startNode command from the profile_name/bin directory. To keep your application server nodes running without having to access the bin directory of each one, use the operating system to monitor and restart the nodeagent process on each application server node. (You can also set up the dmgr server as a managed process on the deployment manager node.) Adding a node automatically issues the startNode command for the node.
Note: When a deployment manager is migrated, the applications in the cell are reinstalled. Even though the name is unchanged, the application is different from the version that was deployed on the previous release. When the federated nodes synchronize with the migrated deployment manager, therefore, they detect the new application and download it. After the application has been downloaded (synchronized), the node agent uses the new application rather than the old application. If the application is running on any active servers, the application will appear to restart as the old application is stopped and uninstalled and the new application is installed and started
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/tins_increment.html