It was difficult for me to find the captioned document (especially the cluster instructions) from IBM on the web, so I have added it to my blog to make it easily available to whom it may concern.
Here is the link to the document for CF installation in a clustered environment.
REMEMBER: CF installation steps are not the same as fix pack installation steps on a cluster. If you make this mistake, you will end up with a failure.
Here is the link:
http://www-01.ibm.com/support/docview.wss?rs=688&uid=swg27014974
You can also follow the instructions given below (copied from the IBM link provided above):
What is new with Fix Pack 6.1.0.2
This fix pack updates the IBM WebSphere Portal 6.1 (6.1.0.0) and 6.1.0.1 levels to the 6.1.0.2 service release level.
This fix pack and these instructions can be used to upgrade the
IBM Web Content Manager 6.1 (6.1.0.0) and 6.1.0.1 levels to the 6.1.0.2
service release level.
Warning:
You must use
the Portal Update Installer
with the release date of 10 March, 2009 (3/10/2009) or later in order
to successfully install this fix pack. Earlier releases of the Update
Installer will result in a failure to restart the server after the
upgrade.
Special note: This fix pack and these instructions can also be
used for a WebSphere Portal Express - Idle Standby deployment. However,
the terminology used in this document is different. IBM WebSphere
Portal Express is licensed for use in a single-server configuration and
may not be used in either a cloned or a clustered configuration except
when you implement Idle Standby for purposes of failover. Implementing
the Idle Standby functionality requires purchase of a separate WebSphere
Portal Express Idle Standby License Option.
The complete list of fixes integrated into this fix pack is found on the "Fix List" tab of this document.
About Fix Pack 6.1.0.2
Installing Fix Pack 6.1.0.2 with the WebSphere Portal Update
Installer for version 6.1 or version 6.1.0.1 raises the fix level of
your product to version 6.1.0.2.
Refer to the installation instructions in the
Steps for installing Fix Pack 6.1.0.2 section for information.
Fix packs can be downloaded from the
version 6.1.0.2 download page, also linked above on the "Download" tab of this document.
Space requirements
Space requirements vary depending on what you are installing. The size of the download is available on the
version 6.1.0.2 download page.
After unpacking the archive file you download, delete the archive file
to free space. Space is also required for backup files in the
portal_server_root/version/backup directory and your system temp
directory, such as /tmp on Unix or Linux platforms or C:\temp on the
Microsoft Windows platform. The space required is about the same as the
size of the fix pack which varies by product and platform.
This archived fix pack file is approximately 380 MB in size. The
temporary disk space used will vary depending on your platform and could
be as much as 800 MB during installation, and as much as 400 MB after
installation into your <Portal_Install_Root> directory. The
<wp_profile> directory temporarily requires at least 600 MB of free space.
Verify that the free space is available
before beginning the installation.
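A quick way to verify the figures above before starting is a small shell check. This helper and the example path/threshold are illustrative assumptions, not part of the IBM procedure:

```shell
#!/bin/sh
# Sanity-check free disk space before starting the fix pack install.
# The helper and the example paths/thresholds are illustrative
# assumptions, not part of the IBM procedure.
check_space() {
    dir="$1"; need_mb="$2"
    # df -Pk prints POSIX output; column 4 is free space in 1K blocks
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    free_mb=$((free_kb / 1024))
    if [ "$free_mb" -lt "$need_mb" ]; then
        echo "FAIL: $dir has ${free_mb}MB free, need ${need_mb}MB"
        return 1
    fi
    echo "OK: $dir has ${free_mb}MB free"
}

# Figures from the text: ~800MB temp during install, ~400MB in
# <Portal_Install_Root>, 600MB in <wp_profile>. Demo uses /tmp:
check_space /tmp 1
```

Run it once per directory (temp, portal root, wp_profile) with the sizes stated above before launching the installer.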
Cluster upgrade planning
There are two options for performing an upgrade in a clustered
environment. One option is to upgrade the cluster while the entire
cluster has been taken offline from receiving user traffic. The upgrade
is performed on every node in the cluster before the cluster is brought
back online to receive user traffic. This is the recommended approach
for an environment with multiple Portal clusters since 24x7 availability
can be maintained. Please see the following document for details:
Multiple Cluster Setup with WebSphere Portal
It is also the simplest approach to use in a single cluster
environment if maintenance windows allow for the Portal cluster to be
taken offline.
For single-cluster environments that cannot tolerate the
outage required to take a cluster offline and perform the upgrade, you
can utilize the single-cluster 24x7 availability process. Review the
following requirements and limitations for performing product upgrades
while maintaining 24x7 availability in a single cluster (
NOTE: Ensure that you understand this information before upgrading your cluster):
Assumptions for maintaining 24x7 operation during the upgrade process:
- If you want to preserve current user sessions during the upgrade
process, make sure that WebSphere Application Server distributed
session support is enabled to recover user session information when a
cluster node is stopped for maintenance. Alternatively, use monitoring
to determine when all (or most) user sessions on a cluster node have
completed before stopping the cluster node for upgrade to minimize the
disruption to existing user sessions.
- Load balancing must be enabled in the clustered environment.
- The cluster has at least two horizontal cluster members.
- Limitations on 24x7 maintenance:
- If you have not implemented horizontal scaling and have
implemented only vertical scaling in your environment such that all
cluster members reside on the same node, the fix pack installation
process will result in a temporary outage for your end users due to a
required restart. In this case, you will be unable to upgrade while
maintaining 24x7 availability.
- If you have a single local Web server in your environment,
maintaining 24x7 availability during the cluster upgrade may not be
possible since you might be required to stop the Web server while
applying corrective service to the local WebSphere Application Server
installation.
- When installing the fix pack in a clustered environment, the
portlets are only deployed when installing the fix pack on the primary
node. The fix pack installation on secondary nodes simply synchronizes
the node with the deployment manager to receive updated portlets. During
the portlet deployment on the primary node, the database will be
updated with the portlet configuration. This updated database, which is
shared between all nodes, would be available to secondary nodes before
the secondary nodes receive the updated portlet binary files. It is
possible that the new portlet configuration will not be compatible with
the previous portlet binary files, and in a 24x7 production environment
problems may arise with anyone attempting to use a portlet that is not
compatible with the new portlet configuration. Therefore it is
recommended that you test your portlets before upgrading the production
system in a 24x7 environment to determine if any portlets will become
temporarily unavailable on secondary nodes during the time between the
completion of the fix pack installation on the primary node and the
installation of the fix pack on the secondary node.
- In order to maintain 24x7 operations in a clustered
environment, it is required that you stop WebSphere Portal on one node
at a time and upgrade it. It is also required that during the upgrade of
the primary node, you manually stop node agents on all other cluster
nodes that continue to service user requests. Failure to do so may
result in portlets being shown as unavailable on nodes having the node
agent running.
- When uninstalling the fix pack in a clustered environment, the
portlets are only redeployed when uninstalling the fix pack on the
primary node. The fix pack uninstall on secondary nodes simply
synchronizes the node with the deployment manager to receive updated
portlets. During the portlet redeployment on the primary node, the
database will be updated with the portlet configuration, which would be
available to secondary nodes before the secondary nodes receive the
updated binary files, since all nodes share the same database. It is
recommended that you test your portlets before uninstalling on the
production system in a 24x7 environment because the possibility of such
incompatibility might arise if the previous portlet configuration is not
compatible with the new portlet binary files.
Steps for installing Fix Pack 6.1.0.2 (single-cluster 24x7 procedure)
Before you begin:
Familiarize yourself with the Portal Upgrade Best Practices available from IBM Remote Technical Support for WebSphere Portal.
Portal Upgrades: Best Practices
1. Perform the following steps before upgrading to Version 6.1.0.2:
a. Review the
supported hardware and software requirements
for this cumulative fix. If necessary, upgrade all hardware and
software before applying this cumulative fix. If updates are required to
WebSphere Application Server level, perform that update first on the
Deployment Manager (as described in the next step). Instructions are
also provided to install WebSphere Application Server updates on each
node in the cluster during the time that node is taken offline from
receiving user traffic.
NOTE: You can download the latest WebSphere Application Server interim fixes from
http://www.ibm.com/software/webservers/appserv/was/support/.
b. If necessary in a clustered environment,
upgrade the IBM WebSphere Application Server on the deployment manager.
Perform the following steps to upgrade the deployment manager; NOTE: If security is not enabled, exclude the -user and -password parameters from the command:
i. Run the following command from the nd_profile_root/bin directory to stop the deployment manager:
- Windows: stopManager.bat -user was_admin_userid -password was_admin_password
- Unix/Linux: ./stopManager.sh -user was_admin_userid -password was_admin_password
- i5/OS: stopManager -profileName dmgr_profile -user was_admin_userid -password was_admin_password
ii. Upgrade the deployment manager to the version of the
WebSphere Application Server required to support the fix pack, including
any interim fixes and fix packs.
iii. Copy the following file from the Primary Node to AppServer/plugins on the deployment manager:
iv. Run the following command from the nd_profile_root/bin directory to start the deployment manager:
- Windows: startManager.bat
- Unix/Linux: ./startManager.sh
- i5/OS: startManager -profileName dmgr_profile
c. Verify that the information in the wkplc.properties,
wkplc_dbtype.properties, and wkplc_comp.properties files is correct on
each node in the cluster:
- Enter a value for the PortalAdminPwd and WasPassword parameters in the wkplc.properties file.
- Ensure that the value of the XmlAccessPort property in
wkplc_comp.properties matches the value of the port used for HTTP
connections to the WebSphere Portal server. NOTE: If you are
using Internet Protocol Version 6 (IPv6) and you have specified
the WpsHostName property as an IP address, normalize the
address by placing square brackets around the IP
address as follows: WpsHostName=[my.IPV6.IP.address].
- The WebSphere Portal Update Installer removes plain text
passwords from the wkplc*.properties files. To keep these passwords in
the properties files, include the following line in the wkplc.properties
file: PWordDelete=false.
- Ensure that the DbUser (database user) and DbPassword
(database password) parameters are defined correctly for all database
domains in the wkplc_comp.properties file.
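The IPv6 bracket normalization described above can be expressed as a tiny helper. This is a sketch only; the function name is an invention, and only the WpsHostName convention comes from the document:

```shell
#!/bin/sh
# Wrap a bare IPv6 literal in square brackets, as the doc requires for
# the WpsHostName property (e.g. WpsHostName=[my.IPV6.IP.address]).
# Hostnames, IPv4 addresses and already-bracketed values pass through.
# This helper is illustrative, not an IBM-supplied tool.
normalize_host() {
    case "$1" in
        "["*"]") echo "$1" ;;   # already bracketed
        *:*)     echo "[$1]" ;; # contains ':' => IPv6 literal
        *)       echo "$1" ;;   # hostname or IPv4 address
    esac
}

normalize_host "fe80::1"            # -> [fe80::1]
normalize_host "portal.example.com" # unchanged
```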
d. Perform the following steps to download the fix pack and the WebSphere Portal Update Installer:
- Download the latest cumulative fix pack file and WebSphere Portal Update Installer from http://www.ibm.com/support/docview.wss?rs=688&uid=swg24022898.
- Create the portal_server_root/update directory and extract the
WebSphere Portal Update Installer file into this directory. NOTE on
Windows: The pkunzip utility might not correctly decompress the download
image so use another utility such as Winzip to unzip the image.
- Create the portal_server_root/update/fixpacks directory and copy the WP_PTF_6102.jar file into this directory.
- Warning: You must use the Portal Update Installer
with the release date of 10 March, 2009 (3/10/2009) or later in order
to successfully install this fix pack. Earlier releases of the Update
Installer will result in a failure to restart the server after the
upgrade.
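The directory layout that step d describes can be sketched in a few shell commands. PORTAL_ROOT here defaults to a scratch directory purely for demonstration; in a real install it is your portal_server_root:

```shell
#!/bin/sh
# Create the update/ and update/fixpacks/ directories and stage the
# fix pack jar, as step d describes. The scratch-directory default and
# the 'touch' stand-in for the real jar are demo assumptions.
PORTAL_ROOT="${PORTAL_ROOT:-$(mktemp -d)}"

mkdir -p "$PORTAL_ROOT/update/fixpacks"
# Real procedure: extract the Update Installer zip into update/ and
# copy WP_PTF_6102.jar into update/fixpacks/. Simulated here:
touch "$PORTAL_ROOT/update/fixpacks/WP_PTF_6102.jar"

ls "$PORTAL_ROOT/update/fixpacks"
```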
e. If you plan to configure Computer Associates eTrust
SiteMinder as your external security manager to handle authorization and
authentication, the XML configuration interface may not be able to
access WebSphere Portal through eTrust SiteMinder. To enable the XML
configuration interface to access WebSphere Portal, use eTrust
SiteMinder to define the configuration URL (/wps/config) as unprotected.
Refer to the eTrust SiteMinder documentation for specific instructions.
After the configuration URL is defined as unprotected, only WebSphere
Portal enforces access control to this URL. Other resources, such as the
/wps/myportal URL, are still protected by eTrust SiteMinder. If you
have already set up eTrust SiteMinder for external authorization and you
want to use XML Configuration Interface (xmlaccess), make sure you have
followed the procedure to allow for xmlaccess execution.
f. For WebSphere Portal V6.1.0.1 with Process Server, the WPS interim fix JR32086 must be installed; otherwise the upgrade will fail.
2. Ensure that automatic synchronization is disabled on all
nodes to be upgraded, and stop the node agents. When the automatic
synchronization is enabled, the node agent on each node automatically
contacts the deployment manager at startup and then every
synchronization interval to attempt to synchronize the node's
configuration repository with the master repository managed by the
deployment manager. Because you must upgrade one node at a time to
maintain 24x7 availability, you should turn off automatic
synchronization to ensure that the nodes that are not yet upgraded do
not inadvertently get any updated enterprise applications prematurely.
- In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
- Click nodeagent for the required node.
- Click File Synchronization Service.
- Uncheck the Automatic Synchronization check box on the File Synchronization Service page to disable the automatic synchronization feature and then click OK.
- Repeat these steps for all other nodes to be upgraded.
- Click Save to save the configuration changes to the master repository.
- Select System Administration > Nodes in the navigation tree.
- Select all nodes that are not synchronized, and click Synchronize.
- Select System Administration > Node agents in the navigation tree.
- For the primary node, select the nodeagent and click Restart.
- Select the nodeagents of all secondary nodes and click Stop.
NOTE: Do not attempt to combine steps 3 and 4! The
update must be performed sequentially, not in parallel, on all of the
server nodes in the cluster. Update the primary node first, then the
secondary node, and then any subsequent nodes, one at a time, in
accordance with the instructions below.
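The ordering the note insists on can be made concrete with a dry-run loop. The node names are hypothetical, and echo stands in for the full per-node procedure of steps 3 and 4:

```shell
#!/bin/sh
# Dry-run sketch of the mandated order: primary node first, then each
# secondary node one at a time -- never in parallel. Node names are
# hypothetical; echo stands in for the real per-node upgrade steps
# (stop traffic, run the Update Installer, resync, restore traffic).
PRIMARY="node01"
SECONDARIES="node02 node03"

upgrade_node() {
    echo "upgrading $1"
}

upgrade_node "$PRIMARY"        # step 3: the primary node, alone
for n in $SECONDARIES; do      # step 4: secondaries, sequentially
    upgrade_node "$n"
done
```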
3. Perform the following steps to upgrade WebSphere Portal on the primary node:
a. Stop IP traffic to the node you are upgrading:
- If you are using IP sprayers for load balancing to the cluster
members, reconfigure the IP sprayers to stop routing new requests to
the Portal cluster member(s) on this node.
- If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
- In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members
to obtain a view of the collection of cluster members. When using WAS
7.x (for the editions that support WAS 7.x see the hardware&software
requirements doc) use the following path: click Servers>Clusters>WebSphere application server clusters>cluster_name>Cluster members.
- Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
- Click Update to apply the change.
- If automatic plug-in generation and propagation is disabled,
manually generate and/or propagate the plugin-cfg.xml file to the Web
servers.
- Note that the web server plug-in checks periodically for configuration updates based on the value of the Refresh Configuration Interval
property for the Web server plug-in (default value is 60 seconds). You
can check this value on the Deployment Manager administrative console by
selecting Servers>Web Servers>web_server_name>Plug-in Properties.
When using WAS 7.x (for the editions that support WAS 7.x see the
hardware&software requirements doc) use the following path: click Servers>Server Types>Web Servers>web_server_name>Plug-in Properties.
- If automatic propagation of the plug-in configuration file is
enabled on the web server(s) disable it from the Deployment Manager
administrative console by going to Servers>Web Servers>web_server_name>Plug-in Properties and unchecking Automatically propagate plug-in configuration file.
When using WAS 7.x (for the editions that support WAS 7.x see the
hardware&software requirements doc) use the following path: click Servers>Server Types>Web Servers>web_server_name>Plug-in Properties.
b. If necessary, perform the following steps to upgrade WebSphere Application Server on the node:
- Run the following command from the was_profile_root/bin directory to stop the node agent:
- Windows: stopNode.bat -user was_admin_userid -password was_admin_password
- Unix/Linux: ./stopNode.sh -user was_admin_userid -password was_admin_password
- i5/OS: stopNode -profileName profile_root -user was_admin_userid -password was_admin_password
- Stop the application servers running on the node.
- Upgrade WebSphere Application Server on the node, including the required interim fixes for WebSphere Portal.
- Run the following command from the was_profile_root/bin directory to start the node agent:
- Windows: startNode.bat
- Unix/Linux: ./startNode.sh
- i5/OS: startNode -profileName profile_root
where profile_root is the name of the WebSphere Application Server profile where WebSphere Portal is installed; for example, wp_profile
c. Choose either the graphical user interface installation option or the command line installation option:
NOTE: If the installation fails, use the
IBM Support Assistant
to access support-related information and serviceability tools for
problem determination. For i5/OS, download ISA on a system other than
i5/OS. On the Support Assistant Welcome page, click Service. Then click
the Create Portable Collector link to remotely collect the data
from your i5/OS system. Fix what is causing the problem and then rerun
the installation task.
d. If using the Universal PUI (which does not include the bundled
Java environment), run setupCmdLine.bat for Windows or
. ./setupCmdLine.sh for Unix/Linux from the
was_profile_root/bin directory to set up the Java environment for the
graphical user interface installation program.
e. Enter the following command to launch the graphical user interface installation program:
- Windows: portal_server_root\update> updatePortalWizard.bat
- Unix/Linux: portal_server_root/update> ./updatePortalWizard.sh
-- OR --
- Perform the following steps to launch the installation program from the command line:
- Users of a platform-specific WebSphere Portal Update
Installer (PUI) which includes the bundled Java runtime can skip this
initial step. If using the Universal PUI, (which does not include the
bundled Java environment), run the following command from the
was_profile_root/bin directory to set up the Java environment for the
WebSphere Portal Update Installer:
- Windows: setupCmdLine.bat
- Unix/Linux: . ./setupCmdLine.sh
- Open a command prompt in the was_profile_root/bin directory
and enter the following command to check the status of all active
application servers:
- Windows: serverStatus.bat -all -user username -password password
- Unix/Linux: ./serverStatus.sh -all -user username -password password
- i5/OS: serverStatus -all -profileName profile_root -user username -password password
- Enter the following command to stop any active application servers:
- Windows: stopServer.bat servername -user username -password password
- Unix/Linux: ./stopServer.sh servername -user username -password password
- i5/OS: stopServer servername -profileName profile_root -user username -password password
- Verify that the deployment manager and node agent for the primary node are running. If they are stopped, start them.
- Enter the following command to launch the installation program (NOTE: Enter the command on one line):
- Windows: portal_server_root\update>
updatePortal.bat -install -installDir "C:\Program
Files\IBM\WebSphere\PortalServer" -fixpack -fixpackDir "C:\Program
Files\IBM\WebSphere\PortalServer\update\fixpacks" -fixpackID WP_PTF_6102
- Unix/Linux: portal_server_root/update>
./updatePortal.sh -install -installDir "/opt/WebSphere/PortalServer"
-fixpack -fixpackDir "/opt/WebSphere/PortalServer/update/fixpacks"
-fixpackID WP_PTF_6102
- i5/OS: portal_server_root_prod/update>
updatePortal.sh -install -installDir "/<portal_server_root_user>"
-fixpack -fixpackDir "/portal_server_root_prod/update/fixpacks"
-fixpackID WP_PTF_6102
g. After the fix pack is installed, check the status of the
node you are upgrading in the Deployment Manager administrative console.
If the status is
Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
a. In the Deployment Manager administrative console, click
System Administration>Nodes.
b. For the node with a status of Not Synchronized, click Synchronize.
c. After the synchronization is complete, wait
at least 20 minutes before performing the next step to ensure that the
node agent EAR expansion process completes.
h. Restart the WebSphere_Portal server on the primary node.
i. Run the following task to activate the portlets:
- Windows: ConfigEngine.bat activate-portlets -DPortalAdminPwd=password -DWasPassword=password
- Unix/Linux: ./ConfigEngine.sh activate-portlets -DPortalAdminPwd=password -DWasPassword=password
- i5/OS: ConfigEngine.sh -profileName profile_root activate-portlets -DPortalAdminPwd=password -DWasPassword=password
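The three platform variants above differ only in the launcher. As a sketch, a dry-run helper that assembles the right command line (the profile name and the literal passwords are placeholders for your real values):

```shell
#!/bin/sh
# Build (but do not run) the platform-specific ConfigEngine command
# for the activate-portlets task listed above. 'wp_profile' and the
# literal passwords are placeholders, not real credentials.
build_cmd() {
    case "$1" in
        windows) launcher="ConfigEngine.bat" ;;
        i5os)    launcher="ConfigEngine.sh -profileName wp_profile" ;;
        *)       launcher="./ConfigEngine.sh" ;;
    esac
    echo "$launcher activate-portlets -DPortalAdminPwd=password -DWasPassword=password"
}

build_cmd windows
build_cmd unix
```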
j. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
k. Restore IP traffic to the node you upgraded:
a. If you are using IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
b. If you are using the Web server plug-in for
load balancing, perform the following steps to restore traffic to the
upgraded node:
i. If you previously disabled automatic propagation of the
plug-in configuration file on the Web server(s), re-enable it now using the
Deployment Manager administration console by going to
Servers>Web Servers>web_server_name>Plug-in Properties and checking
Automatically propagate plug-in configuration file. When
using WAS 7.x (for the editions that support WAS 7.x see the
hardware&software requirements doc) use the following path: click
Servers>Server Types>Web Servers>web_server_name>Plug-in Properties.
ii. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members
to obtain a view of the collection of cluster members. When using WAS
7.x (for the editions that support WAS 7.x see the hardware&software
requirements doc) use the following path: click Servers>Clusters>WebSphere application server clusters>cluster_name>Cluster members.
iii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
iv. Click Update to apply the change.
v. If you are not using automatic generation
and propagation for the Web server plug-in, manually generate and/or
propagate the plugin-cfg.xml file to the Web servers.
l. If you preserved passwords during the installation and you
want to manually delete the passwords, open the wkplc.properties file
and include the following line in the file: PWordDelete=true. Then run
the following task from the portal_server_root/config directory to
delete the passwords:
- Windows: ConfigEngine.bat action-delete-passwords-fixpack
- Unix/Linux: ./ConfigEngine.sh action-delete-passwords-fixpack
- i5/OS: ConfigEngine.sh -profileName profile_root action-delete-passwords-fixpack
m. i5/OS only: Run the
CHGJOBD JOBD(QWAS61/QWASJOBD) LOG(4 00 *NOLIST) command to disable WebSphere Application Server informational logging.
n. If you upgraded from a prior release with
additional interim fixes installed, you will need to reinstall any
interim fixes that were not integrated into the version 6.1.0.2
installation. Open the CheckAdditionalEfixes_date_time.log
file, located in the portal_server_root/version/log directory, for the
list of interim fixes that need to be reinstalled. Before reinstalling
an interim fix, go to the
WebSphere Portal product support page
to see if there is a newer version of the interim fix, because these are
often specific to a version and release of the product; search on
the APAR number to find more information.
NOTE: Do not attempt to upgrade secondary or subsequent nodes until
after completing Step 3 (Primary Node) above! The update
must be performed sequentially, not in parallel, on all of the server
nodes in the cluster. Update the primary node first, then the secondary
node, and then any subsequent nodes, one at a time, in accordance with
the instructions below.
4. Perform the following steps to upgrade
WebSphere Portal on each secondary node after completing the upgrade on
the primary node:
a. Stop IP traffic to the node you are upgrading:
- If you are using IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
- If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
i. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members
to obtain a view of the collection of cluster members. When using WAS
7.x (for the editions that support WAS 7.x see the hardware&software
requirements doc) use the following path: click Servers>Clusters>WebSphere application server clusters>cluster_name>Cluster members.
ii. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
iii. Click Update to apply the change.
iv. If automatic plug-in generation and propagation is disabled, manually
generate and/or propagate the plugin-cfg.xml file to the Web servers.
v. Note that the web server plug-in checks periodically for configuration updates based on the value of the Refresh Configuration Interval
property for the Web server plug-in (default value is 60 seconds). You
can check this value on the Deployment Manager administrative console by
selecting Servers>Web Servers>web_server_name>Plug-in Properties.
When using WAS 7.x (for the editions that support WAS 7.x see the
hardware&software requirements doc) use the following path: click Servers>Server Types>Web Servers>web_server_name>Plug-in Properties.
b. If necessary, upgrade WebSphere Application Server on the node, including the required interim fixes for WebSphere Portal.
c. Run the following command from the was_profile_root/bin directory to start the node agent:
- Windows: startNode.bat
- Unix/Linux: ./startNode.sh
- i5/OS: startNode -profileName profile_root
d. Choose either the graphical user interface installation option or the command line installation option:
NOTE: If the installation fails, use the
IBM Support Assistant
to access support-related information and serviceability tools for
problem determination. For i5/OS, download ISA on a system other than
i5/OS. On the Support Assistant Welcome page, click Service. Then click
the Create Portable Collector link to remotely collect the data
from your i5/OS system. Fix what is causing the problem and then rerun
the installation task.
NOTE: For optimal performance and minimal
network traffic when using the Portal Update Installer's graphical
Wizard interface, the upgrade steps should be run from local displays.
When using remote displays, it is recommended to use the command-line
interface with the Update Installer.
If using the Universal PUI (which does not include the bundled Java
environment), run setupCmdLine.bat for Windows or
. ./setupCmdLine.sh for Unix/Linux from the was_profile_root/bin directory
to set up the Java environment for the graphical user interface
installation program.
Enter the following command to launch the graphical user interface installation program:
- Windows: portal_server_root\update> updatePortalWizard.bat
- Unix/Linux: portal_server_root/update> ./updatePortalWizard.sh
-- OR --
- Perform the following steps to launch the installation program from the command line:
- Users of a platform-specific WebSphere Portal Update
Installer (PUI) which includes the bundled Java runtime can skip this
initial step. If using the Universal PUI, (which does not include the
bundled Java environment), run the following command from the
was_profile_root/bin directory to set up the Java environment for the
WebSphere Portal Update Installer:
- Windows: setupCmdLine.bat
- Unix/Linux: . ./setupCmdLine.sh
- Open a command prompt in the was_profile_root/bin directory
and enter the following command to check the status of all active
application servers:
- Windows: serverStatus.bat -all -user username -password password
- Unix/Linux: ./serverStatus.sh -all -user username -password password
- i5/OS: serverStatus -all -profileName profile_root -user username -password password
- Enter the following command to stop any active application servers:
- Windows: stopServer.bat servername -user username -password password
- Unix/Linux: ./stopServer.sh servername -user username -password password
- i5/OS: stopServer servername -profileName profile_root -user username -password password
- Verify that the deployment manager and node agent are running. If they are stopped, start them.
- Enter the following command to launch the installation program (NOTE: Enter the command on one line):
- Windows: portal_server_root\update> updatePortal.bat -install
-installDir "C:\Program Files\IBM\WebSphere\PortalServer"
-fixpack
-fixpackDir "C:\Program Files\IBM\WebSphere\PortalServer\update\fixpacks"
-fixpackID WP_PTF_6102
- Unix/Linux: portal_server_root/update> ./updatePortal.sh -install
-installDir "/opt/WebSphere/PortalServer"
-fixpack
-fixpackDir "/opt/WebSphere/PortalServer/update/fixpacks"
-fixpackID WP_PTF_6102
- i5/OS: portal_server_root_prod/update> updatePortal.sh -install
-installDir "/<portal_server_root_user>"
-fixpack
-fixpackDir "/portal_server_root_prod/update/fixpacks"
-fixpackID WP_PTF_6102
e. After the fix pack is installed, check the status of the
node you are upgrading in the Deployment Manager administrative console.
If the status is
Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
i. In the Deployment Manager administrative console, click System Administration>Nodes.
ii. For the node with a status of Not Synchronized, click Synchronize.
iii. After the synchronization is complete, wait at least 20 minutes before
performing the next step to ensure that the node agent EAR expansion
process completes.
f. Restart the WebSphere_Portal server on the secondary node.
g. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
h. Restore IP traffic to the node you upgraded:
- If you are using IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
i.
If you are using the Web server plug-in for load balancing, perform the
following steps to restore traffic to the upgraded node:
i. In the Deployment Manager administrative console, click Servers > Clusters > cluster_name > Cluster members to obtain a view of the collection of cluster members. If you are using WAS 7.x (for the editions that support WAS 7.x, see the hardware and software requirements document), use this path instead: Servers > Clusters > WebSphere application server clusters > cluster_name > Cluster members.
ii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
iii. Click Update to apply the change.
iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
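For the manual regeneration mentioned in the last step, WebSphere Application Server ships a GenPluginCfg script in the profile's bin directory. A dry-run sketch; the deployment manager profile path below is an assumption, and propagating the resulting plugin-cfg.xml to each Web server is still a separate copy step:

```shell
#!/bin/sh
# Dry-run sketch: print the GenPluginCfg command that regenerates plugin-cfg.xml.
# DMGR_PROFILE_BIN is a placeholder assumption -- use your own profile path.
DMGR_PROFILE_BIN="/opt/WebSphere/AppServer/profiles/Dmgr01/bin"

CMD="${DMGR_PROFILE_BIN}/GenPluginCfg.sh"
echo "$CMD"
# After running it, copy (propagate) the generated plugin-cfg.xml to each Web server.
```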
i. If you preserved passwords during the installation and you want to manually delete the passwords, open the wkplc.properties file and include the following line in the file: PWordDelete=true. Then run the following task from the portal_server_root/config directory to delete the passwords:
- Windows: ConfigEngine.bat action-delete-passwords-fixpack
- Unix/Linux: ./ConfigEngine.sh action-delete-passwords-fixpack
- i5/OS: ConfigEngine.sh -profileName profile_root action-delete-passwords-fixpack
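The wkplc.properties edit in step i can be scripted. The runnable sketch below appends the flag to a temporary stand-in for portal_server_root/config/wkplc.properties (so it runs anywhere); the ConfigEngine invocation is left as a comment because it only exists on a Portal host:

```shell
#!/bin/sh
# Sketch: add PWordDelete=true to wkplc.properties before running the
# action-delete-passwords-fixpack task. A temp file stands in for the real one.
WKPLC="$(mktemp)"                                 # stand-in for wkplc.properties
printf 'PortalAdminId=wpsadmin\n' > "$WKPLC"      # placeholder existing content

# Append the flag only if it is not already present.
grep -q '^PWordDelete=' "$WKPLC" || echo 'PWordDelete=true' >> "$WKPLC"

cat "$WKPLC"
# On a real host, follow with (from portal_server_root/config):
#   ./ConfigEngine.sh action-delete-passwords-fixpack
```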
j. i5/OS only: Run the CHGJOBD JOBD(QWAS61/QWASJOBD) LOG(4 00 *NOLIST) command to disable WebSphere Application Server informational logging.
k. If you upgraded from a prior release with additional interim fixes installed, you will need to reinstall any interim fixes that were not integrated into the version 6.1.0.2 installation. Open the CheckAdditionalEfixes_date_time.log file, located in the portal_server_root/version/log directory, for the list of interim fixes that need to be reinstalled. Before reinstalling an interim fix, check the WebSphere Portal product support page for a newer version of it, because interim fixes are often specific to a version and release of the product; search on the APAR number to find more information.
5. Perform the following post-cluster installation upgrade steps:
a. Re-enable automatic synchronization on all nodes in the cluster if you disabled it earlier.
- In the administrative console for the deployment manager, select System Administration > Node Agents in the navigation tree.
- Click nodeagent for the required node.
- Click File Synchronization Service.
- Check the Automatic Synchronization check box on the File Synchronization Service page to enable the automatic synchronization feature and then click OK.
- Repeat these steps for all remaining nodes.
- Click Save to save the configuration changes to the master repository.
- Select System Administration > Nodes in the navigation tree.
- Select all nodes that are not synchronized, and click Synchronize.
- Select System Administration > Node Agents in the navigation tree.
- Select all node agents where automatic synchronization has been re-enabled and click Restart.
b. Perform the following steps if you are using IBM Web Content Manager and you created content on the release you upgraded from:
i. Redeploy your customization, including JSPs, to the Web Content Manager enterprise application and the local rendering portlet.
c. Optional: The WebSphere Portal 6.1.0.2 fix pack does not update any of the business portlets or the Web Clipping portlet, as these are served from the IBM WebSphere Portal Business Solutions catalog. If this fix pack is updating a fresh installation of WebSphere Portal, you should download the latest available portlets and portlet applications from the Portal Catalog. If you already have the version of the business portlets or Web Clipping portlet you need, or if you are not using these functions at all, no additional steps are necessary.
e. If you are using or planning to use remote search on Portal 6.1.0.2, install PK84560 before you start using remote search or update the remote search files.
g. Copy the following two files:
- <PortalServer_root>/base/wp.dynamicui.app/installableApps/dynamicui_transformation.war
to <PortalServer_root>/installableApps/
- <PortalServer_root>/base/wp.dynamicui.app/installableApps/tpl_transformation.war
to <PortalServer_root>/installableApps/
Run the following two tasks from the <wp_profile_root>/ConfigEngine directory:
- Windows: ConfigEngine.bat action-deploy-transformation-dynamicui-wp.dynamicui.config
- Windows: ConfigEngine.bat action-deploy-transformation-tpl-wp.dynamicui.config
- Unix/Linux: ./ConfigEngine.sh action-deploy-transformation-dynamicui-wp.dynamicui.config
- Unix/Linux: ./ConfigEngine.sh action-deploy-transformation-tpl-wp.dynamicui.config
- i5/OS: ConfigEngine.sh -profileName profile_root action-deploy-transformation-dynamicui-wp.dynamicui.config
- i5/OS: ConfigEngine.sh -profileName profile_root action-deploy-transformation-tpl-wp.dynamicui.config
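Step g above (copy the two WARs, then run both tasks) can be sketched as one script. The directory names mirror the documented locations; the loop is my own arrangement. The demo uses a temporary directory standing in for <PortalServer_root> so it runs anywhere; on a real host, point PS_ROOT at your installation and run the commented ConfigEngine tasks from <wp_profile_root>/ConfigEngine:

```shell
#!/bin/sh
# Sketch of step g: copy the two dynamic-UI WARs, then run both ConfigEngine tasks.
# A temp dir stands in for <PortalServer_root>; empty files stand in for the WARs.
PS_ROOT="$(mktemp -d)"
mkdir -p "$PS_ROOT/base/wp.dynamicui.app/installableApps" "$PS_ROOT/installableApps"
: > "$PS_ROOT/base/wp.dynamicui.app/installableApps/dynamicui_transformation.war"
: > "$PS_ROOT/base/wp.dynamicui.app/installableApps/tpl_transformation.war"

for war in dynamicui_transformation.war tpl_transformation.war; do
  cp "$PS_ROOT/base/wp.dynamicui.app/installableApps/$war" "$PS_ROOT/installableApps/"
done
ls "$PS_ROOT/installableApps"

# On a real host, follow with (from <wp_profile_root>/ConfigEngine):
#   ./ConfigEngine.sh action-deploy-transformation-dynamicui-wp.dynamicui.config
#   ./ConfigEngine.sh action-deploy-transformation-tpl-wp.dynamicui.config
```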