
Oracle RDBMS : Generic Large Object (LOB) Performance Guidelines


This blog post is generic in nature and is based on my recent experience with a content management system where SecureFile BLOBs are critical in storing and retrieving the checked-in content. It is strongly suggested to check the official documentation in addition to these brief guidelines. In general, Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) is a good starting point when creating tables involving SecureFiles and LOBs. A sample CREATE TABLE statement follows the guidelines below.

Guidelines

  • Tablespace: create the LOB in a different tablespace isolated from the rest of the database
  • Block size: consider larger block size (default 8 KB) if the expected size of the LOB is big
  • Chunk size: consider larger chunk size (default 8 KB) if larger LOBs are expected to be stored and retrieved
  • Inline or Out-of-line: choose "DISABLE STORAGE IN ROW" (out-of-line) if the average LOB size is expected to be > 4 KB. The default inlining is fine for smaller LOBs
  • CACHE or NOCACHE: consider bypassing the database buffer cache (NOCACHE) if a large number of LOBs are stored and not expected to be retrieved frequently
  • COMPRESS or NOCOMPRESS: choose COMPRESS option if storage capacity is a concern and a constraint. It saves disk space at the expense of some performance overhead. In a RAC database environment, it is recommended to compress the LOBs to reduce the interconnect traffic
  • De-duplication: by default, duplicate LOBs are stored as a separate copy in the database. Choosing DEDUPLICATE option enables sharing the same data blocks for similar files thus reducing storage overhead and simplifying storage management
  • Partitioning: consider partitioning the parent table to maximize application performance. Hash partitioning is one of the options if there is no potential partition key in the table
  • Zero-Copy I/O protocol: turned on by default. Turning it off in a RAC database environment could be beneficial. Set the initialization parameter _use_zero_copy_io=FALSE to turn off the Zero-Copy I/O protocol
  • Shared I/O pool: database uses the shared I/O pool to perform large I/O operations on securefile LOBs. The shared I/O pool uses shared memory segments. If this pool is not large enough or if there is not enough memory available in this pool for a securefile LOB I/O operation, Oracle uses a portion of PGA until there is sufficient memory available in the shared I/O pool. Hence it is recommended to size the shared I/O pool appropriately by monitoring the database during the peak activity. Relevant initialization parameters: _shared_io_pool_size and _shared_iop_max_size
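
For illustration, here is a minimal sketch of a CREATE TABLE statement that pulls several of the above options together. The table, column and tablespace names as well as the sizing values are illustrative assumptions, not recommendations -- adjust them to the actual workload.

CREATE TABLE doc_content (
  doc_id   NUMBER PRIMARY KEY,
  content  BLOB
)
TABLESPACE app_data
LOB (content) STORE AS SECUREFILE content_seg (
  TABLESPACE lob_data        -- LOB segment isolated from the rest of the database
  DISABLE STORAGE IN ROW     -- out-of-line storage; average LOB expected to be > 4 KB
  CHUNK 32768                -- larger chunk size for larger LOBs
  NOCACHE                    -- bypass the database buffer cache
  COMPRESS MEDIUM            -- requires the Advanced Compression option
  DEDUPLICATE                -- share data blocks across duplicate LOBs
);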

Also see:
Oracle Database Documentation : LOB Performance Guidelines


Siebel Connection Broker Load Balancing Algorithm


Siebel server architecture supports spawning multiple application object manager processes. The Siebel Connection Broker, SCBroker, tries to balance the load (incoming requests) across different object manager processes running in a single Siebel server.

Least Loaded or Round Robin?

By default, SCBroker forwards each incoming request to the object manager process that is least loaded - meaning the process with the least number of running tasks. In Siebel terminology, this behavior is referred to as the "least-loaded" or "LL" connection forwarding algorithm. While the default LL algorithm provides optimal behavior in the best-case scenarios, it may lead to serious availability problems if one of several object manager processes running in a Siebel server stops responding in a timely fashion for some reason. Such an object manager may still accept requests even though they may time out. At some point, the unresponsive, hung, or erroneous object manager will have the least number of tasks, which may prompt the SCBroker component to forward new incoming requests to that object manager process - which in turn leads to a stalemate. To avoid such situations, it is recommended to configure the "round-robin" or "RR" algorithm in the SCBroker component. When the round-robin algorithm is configured, SCBroker ignores the number of running tasks per object manager process and routes incoming requests to all object managers in a round-robin fashion.

While both algorithms have their strengths and weaknesses, customers must weigh both options and choose the one that fits best in their deployment.

eg.,

Find the current load balancing algorithm:


srvrmgr> list advanced param ConnForwardAlgorithm for comp SCBroker show PA_ALIAS, PA_VALUE, PA_NAME

PA_ALIAS PA_VALUE PA_NAME
-------------------- -------- -----------------------------------------
ConnForwardAlgorithm LL Connection Forward algorithm for SCBroker

Configure SCBroker to use round-robin algorithm:


srvrmgr> change param ConnForwardAlgorithm=RR for comp SCBroker server SERVER_NAME
Command completed successfully.

srvrmgr> list advanced param ConnForwardAlgorithm for comp SCBroker show PA_ALIAS, PA_VALUE, PA_NAME

PA_ALIAS PA_VALUE PA_NAME
-------------------- -------- -----------------------------------------
ConnForwardAlgorithm RR Connection Forward algorithm for SCBroker

Other SCBroker parameters of interest: ConnForwardTimeout and ConnRequestTimeout
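
Their current values can be examined with the same srvrmgr syntax shown above. eg.,

srvrmgr> list advanced param ConnForwardTimeout for comp SCBroker show PA_ALIAS, PA_VALUE, PA_NAME
srvrmgr> list advanced param ConnRequestTimeout for comp SCBroker show PA_ALIAS, PA_VALUE, PA_NAME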

Oracle Database on NFS : Resolving "ORA-27086: unable to lock file - already in use" Error


Some Context

The Oracle database was hosted on a ZFS Storage Appliance (NAS). The database files are accessible from the database server node via NFS-mounted filesystems. Solaris 10 is the operating system on the DB node.

Someone forgot to shut down the database instance and unmount the remote filesystems before rebooting the database server node. After the system booted up, Oracle RDBMS failed to bring up the database due to locked data files.

eg.,


SQL> startup
ORACLE instance started.

Total System Global Area 1.7108E+10 bytes
Fixed Size 2165208 bytes
Variable Size 9965671976 bytes
Database Buffers 6845104128 bytes
Redo Buffers 295329792 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/orclvol4/entDB/system01.dbf'


======================
Extract from alert log:
======================

...
ALTER DATABASE OPEN
Fri Aug 05 21:30:54 2011
Errors in file /oracle112/diag/rdbms/entdb/entDB/trace/entDB_dbw0_7235.trc:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/orclvol4/entDB/system01.dbf'
ORA-27086: unable to lock file - already in use
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 8
Additional information: 21364

Errors in file /oracle112/diag/rdbms/entdb/entDB/trace/entDB_dbw0_7235.trc:
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/orclvol4/entDB/sysaux01.dbf'
ORA-27086: unable to lock file - already in use
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 8
Additional information: 21364
...

Reason for the lock failure:

Because of the sudden ungraceful shutdown of the database, file locks on the data files were not released by the NFS server (ZFS SA in this case). The NFS server held on to the file locks even after the NFS client (the DB server node in this example) was restarted. Due to this, Oracle RDBMS was not able to lock those data files residing on the NFS server (ZFS SA). As a result, the database instance failed to start up in exclusive mode.

Workaround

Manually clear the NFS locks as outlined below.

On NFS Client (database server node):

  1. Shutdown the mounted database
  2. Unmount remote (NFS) filesystems
  3. Execute: clear_locks -s <nfs_server_host>

    eg.,


    # clear_locks -s sup16
    Clearing locks held for NFS client ipsedb1 on server sup16
    clear of locks held for ipsedb1 on sup16 returned success

On NFS Server (ZFS SA):
    (this step may not be necessary but wouldn't hurt to perform)

  1. Execute: clear_locks <nfs_client_host>

    eg.,


    sup16# clear_locks 10.129.207.93
    Clearing locks held for NFS client 10.129.207.93 on server sup16
    clear of locks held for 10.129.207.93 on sup16 returned success

Again back on NFS Client (database server node):

  1. Restart NFS client
        (this step may not be necessary but wouldn't hurt to perform)

    # svcadm -v disable nfs/client
    # svcadm -v enable nfs/client
  2. Mount remote/NFS filesystems
  3. Finally start the database

Also see:
Listing file locks on Solaris 10

Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl


Symptom:

A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. The SRProc process logs ODBC error messages similar to the following:


Message: GEN-13,
Additional Message: dict-ERR-1109:
Unable to read value from export file (Data length (32) > Column definition (3)).

Message: GEN-13,
Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ).

GenericLog GenericError 1 0002157.. 11-11-18 13:28 Message: Generated SQL statement:,
Additional Message: SQLFetch:
SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY,
'N', RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH,
RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS
FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ
WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?)
AND DOBJ.ROW_ID = RDOBJ.DOCK_ID
AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL)
AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL)

Message: Error: An ODBC error occurred,
Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch

Message: GEN-13,
Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)).

Message: GEN-13,
Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ).

Message: GEN-10,
Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects

Message: GEN-10,
Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo

GenericError
(srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
SmiLayerLog Error Terminate process due to unrecoverable error: 3006. (Main Thread)

An inconsistent or corrupted dictionary file "diccache.dat" is likely the cause.

Solution:

  • Stop the application server and manually kill the remaining Siebel application specific processes

    eg.,


    stop_server all

    pkill siebmtsh
    pkill siebproc
    ..
  • Remove $SIEBEL_HOME/bin/diccache.dat file. It will be re-generated during the application server startup
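
    eg., (assuming $SIEBEL_HOME points to the Siebel server installation)

    rm $SIEBEL_HOME/bin/diccache.dat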

  • Start the application server

    start_server all

Solaris Tip: Resolving "statd: cannot talk to statd at , RPC: Timed out(5)"


Symptom:

System log shows a bunch of RPC timed out messages such as the following:



Dec 13 09:23:23 gil08 last message repeated 1 time
Dec 13 09:29:14 gil08 statd[19858]: [ID 766906 daemon.warning] statd: cannot talk to statd at ssc23, RPC: Timed out(5)
Dec 13 09:35:05 gil08 last message repeated 1 time
Dec 13 09:40:56 gil08 statd[19858]: [ID 766906 daemon.warning] statd: cannot talk to statd at ssc23, RPC: Timed out(5)
..

Those messages are the result of an apparent communication failure between the status daemons (statd) of both local and remote hosts using RPC calls.

Workaround/Solution:

If the target host is reachable, execute the following to stop the system from generating those warning messages: stop the network status monitor, remove the target host entry from the /var/statmon/sm.bak file, and start the network status monitor process again. Removing the target host entry from the sm.bak file keeps that machine from being aware that it may have to participate in locking recovery.

eg.,


# ps -eaf | fgrep statd
daemon 14304 19622 0 09:47:16 ? 0:00 /usr/lib/nfs/statd
root 14314 14297 0 09:48:03 pts/15 0:00 fgrep statd

# svcs -a | grep "nfs/status"
online 9:52:41 svc:/network/nfs/status:default

# svcadm -v disable nfs/status
svc:/network/nfs/status:default disabled.

# ls /var/statmon/sm.bak
ssc23

# rm /var/statmon/sm.bak/ssc23

# svcadm -v enable nfs/status
svc:/network/nfs/status:default enabled.

Oracle Application Testing Suite (OATS): Few Tips & Tricks

OATS is a suite of applications that can be used for performance and scalability testing as well as functional and regression testing. It is a thin-client application that runs within a web browser - so it is easy to use the tool from anywhere as long as the web server running on the host node is accessible. Hopefully the following tips and tricks will benefit some users of the Oracle Application Testing Suite.
A few technical details first - OATS is a 32-bit Java application that runs in a WebLogic (WLS) container, with an Oracle XE database as the backend store for test session data.


[Trick] Issue : OATS software fails to install on 64-bit Windows systems
Resolution:
Download and install 64-bit .NET framework manually before installing the OATS software. Look for .NET framework on Microsoft's downloads website.



[Trick] Issue : OATS software fails to install on systems with a large number of [virtual] CPUs
Resolution:
On systems with many cores/vCPUs, Oracle database in general requires large amounts of memory to be configured for the SGA - so one solution would be to allocate as much memory as required. However, Oracle XE limits memory utilization within the database to 1 GB. Besides, XE uses only one CPU even if multiple CPUs are available on the system. Hence one workaround is to limit the number of vCPUs that the system exposes during the installation of the OATS software. The steps are shown below.
  • Start button -> Run -> type "msconfig"
  • Click on Boot tab -> Advanced Options
  • Check "Number of processors" and set appropriate value (I believe we can go up to 16)
  • Reboot Windows
  • Uninstall failed OATS installation and try installing again
  • Undo the above made changes after the successful installation of OATS
  • Reboot Windows one final time
Thanks to my colleague Bao Doan for providing this workaround.



[Trick] Issue : During runtime, OATS drives the load and executes the test as expected but fails to collect runtime statistics
Resolution:
This is another limitation of the Oracle XE database. Until 10g, XE limited the maximum amount of user data in the database to 4 GB. This limit was raised to 11 GB in the Oracle 11g XE release. OATS 9.x releases bundle Oracle 10g XE. To take advantage of the larger data limit, install Oracle 11g XE manually before installing the OATS software. The OATS installer gives the option to use an existing installation of Oracle XE. Besides, it is not possible to have multiple Oracle XE installations on a single box anyway (that's another XE limitation).
For existing installations, one workaround is to remove old and unwanted sessions to make room for new sessions in the database. Listed below are the steps.
  • Connect to the Oracle Load Testing (OLT) tool
  • Click on "Manage" top-level menu (upper right corner) -> Sessions
  • Click on any unwanted session and press "Delete" button (I recommend deleting one session at a time)



[Trick] Issue : Under load, there are many network timeouts with tons of sockets in TIME_WAIT state on OATS agent systems including the OATS Controller node
Resolution:
Tune TCP/IP parameters on Windows as shown below.
  • Launch Windows registry
  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIP\parameters
  • Configure the following two parameters. If not found, create those parameters by selecting Edit -> New -> DWORD Value from the menu bar. Select "Decimal" under Base.
      TcpTimedWaitDelay : 30 [seconds]
      MaxUserPort : 65534
  • Reboot Windows
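Equivalently, the same registry values can be set from an elevated command prompt using the reg utility -- a sketch with the values suggested above:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f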
Thanks to my colleagues Dino and Vishnu for sharing this workaround.



[Trick] Issue : OATS Controller does not show any graphs or analysis reports
Resolution:
Install Adobe Flash Plugin and try again.



[Trick] Issue : Under load, OATS Controller stops collecting runtime statistics at some random point
Resolution:
Check the Oracle database alert log for clues. If there is an error message such as "ORA-12516: TNS:listener could not find available handler with matching protocol stack", connect to the database, query the v$resource_limit view, and compare the values reported under CURRENT_UTILIZATION and MAX_UTILIZATION for the resource "processes". If the current utilization is close to the configured maximum value, raise the value of the processes parameter in the [S]PFILE.
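
eg., a quick check from SQL*Plus; the new value of 500 shown below is purely illustrative:

SQL> SELECT resource_name, current_utilization, max_utilization, limit_value
       FROM v$resource_limit
      WHERE resource_name = 'processes';

SQL> ALTER SYSTEM SET processes=500 SCOPE=SPFILE;
/* restart the database instance for the new value to take effect */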



[Tip] Balancing the load among multiple OATS agent systems
One simple way is to create a VU Agent System Group based on the available agent systems. The steps are listed below.
  • Connect to the Oracle Load Testing (OLT) tool
  • Click on "Manage" top-level menu (upper right corner) -> Systems
  • Click on "VU Agent System Group" in the left hand side
  • On the right hand side, click on "New" option
  • Select all the agent systems that you want to be part of the "VU Agent System Group"
  • Finally name the newly created system group and save
Note that it is not possible to attach weights to the agent systems - so, it is suggested to have agent systems with similar hardware configurations in the VU Agent System Group.


[Tip] Balancing the load among multiple web servers using OATS Controller
If there are multiple web server instances running in an enterprise application deployment, and OATS software is being used to test the performance and scalability of the application, parameterizing the web server hostname and port number in the OATS test script will take care of the web server load balancing problem. Of course, there are many alternatives to this approach, such as using a hardware load balancer or a web server reverse proxy.


[Added on 01/19/2012]

[Tip] How-To check the available space in the USERS tablespace?
Run the following on OATS Controller node:
Start -> All Programs -> Oracle Database XX Express Edition -> Run SQL Command Line
SQL> connect / as sysdba

SQL> SELECT /*+ RULE */ df.tablespace_name "Tablespace",
            df.bytes / (1024 * 1024) "Size (MB)",
            SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
            Nvl(Round(SUM(fs.bytes) * 100 / df.bytes), 1) "% Free",
            Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
       FROM dba_free_space fs,
            (SELECT tablespace_name, SUM(bytes) bytes
               FROM dba_data_files
              GROUP BY tablespace_name) df
      WHERE fs.tablespace_name (+) = df.tablespace_name
      GROUP BY df.tablespace_name, df.bytes
     UNION ALL
     SELECT /*+ RULE */ df.tablespace_name tspace,
            fs.bytes / (1024 * 1024),
            SUM(df.bytes_free) / (1024 * 1024),
            Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
            Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
       FROM dba_temp_files fs,
            (SELECT tablespace_name, bytes_free, bytes_used
               FROM v$temp_space_header
              GROUP BY tablespace_name, bytes_free, bytes_used) df
      WHERE fs.tablespace_name (+) = df.tablespace_name
      GROUP BY df.tablespace_name, fs.bytes, df.bytes_free, df.bytes_used
      ORDER BY 4 DESC;
Copy/paste the above SQL code into a text file with a .sql extension, and execute the script from the SQL> prompt. eg., assuming the above code was saved in a plain text file called chktblspcusg.sql under the C:\ drive, execute the SQL script as shown below:
SQL> @C:\chktblspcusg.sql



[Added on 06/27/2012]

[Trick] Issue : An attempt to open a test script in OpenScript fails with the error
'Failed to open script' has encountered a problem.
Failed to open . See error log for details.

Clicking on the "Details" button provides the following clue.

The project description file (.project) for '' is missing"

In addition the title bar shows "Relocating Eclipse Projects: The project description file (.project) for XXX is missing".

Resolution:
Navigate to C:\Documents and Settings\Administrator\osworkspace\.metadata\.plugins\org.eclipse.core.resources\.projects\

Look for the directory by name "<failing_script_name>" and remove it
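
eg., from a Windows command prompt (the script name below is a placeholder):

rmdir /s /q "C:\Documents and Settings\Administrator\osworkspace\.metadata\.plugins\org.eclipse.core.resources\.projects\<failing_script_name>"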



[Added on 08/03/2012]

[Trick] Issue: Unexpected Agent exit. Code = 51 in the middle of an OLT load test

When running a load scenario in Oracle Load Testing (OLT) that uses a databank, the scenario runs fine for some time and then all of a sudden fails with the following error: Unexpected Agent exit. Code = 51.

Workaround:

The following settings may alleviate the issue.
  • toggle/experiment with the settings for "Clear cache between iterations" and "Clear cache before playing back"
    • those settings can be found under the test script preferences -> Playback -> Web Functional -> Miscellaneous
  • experiment with different values for the "Maximum users per process" setting
    • this setting is under OLT -> Configure all parameters -> Advanced
  • increase the Java heap size (both min & max) in the file <OATS_HOME>\agentmanager\bin\AgentManagerService.conf
    • default values: min heap size: 16 MB; max heap size: 64 MB
Contributors: John Snyder, Richard Barry

[Added 02/25/13]

Another colleague, Dave Suri, has an alternate tip to resolve the Agent exit code 51 issue.

Edit <OATS_HOME>\agentmanager\processDescriptors\JavaAgent.properties

Change the following lines:


#process.debug=y
#process.debug.suspend=y
#process.debug.port=8123
#process.debug.custom=

To:


process.debug=y
#process.debug.suspend=y
#process.debug.port=8123
process.debug.custom=-verbose:gc -XX:+HeapDumpOnOutOfMemoryError -Xms512M -Xmx1536M -jrockit -Xrs -XgcPrio:deterministic -XpauseTarget=50ms -XX:+UseCallProfiling -XX:+UseAdaptiveFatSpin -XX:+ExitOnOutOfMemoryError -XXnoSystemGC -XX:+UseFastTime



Unwanted Software Installers


After all these years of software evolution, it is odd to see so little improvement in the area of software installation. Customers do not seem to mind dealing with different, complex installers. Nevertheless, this whole process can be simplified to save time, effort, and energy.

In an ideal world, a software installer is supposed to have just one function - copying the software bits to a designated location and nothing else. However, today we interact with installers that do a variety of things -- some install pre-compiled binaries, a few come as ready-to-extract zip archives, and a few others compile the binary on the fly and install it. Most enterprise software installers configure the software as part of the installation process, whereas a few installers install the software and simply quit, leaving the configuration step to the experts. Some installers hard-code the hostname, IP address, absolute paths of certain files, etc. into some of the files on the target system, which makes it hard to re-use the software home directory on a different server. A few installers do a sensible job by not tying anything to the host system where the software is being installed.

Here is my partial wish list of features for a software installer. I think it is enough to make a point.

Idempotent installations: install the software once, run it anywhere. Customers should be able to move the resulting home directories from one host to any location on another host without worrying about underlying changes to the location of the home directory, hostname, IP address, etc. One example is the Oracle RDBMS installation. Once installed, the ORACLE_HOME can be zipped up, moved to another host, extracted, and used right away. ORACLE_HOME usually contains the binaries; installation-specific configuration is stored outside of ORACLE_HOME. The optional Oracle Grid Control configuration appears to be saved under ORACLE_HOME, which is an aberration, though it can be easily reconfigured once moved to another host.

Simplicity: providing the entire directory structure in an extractable compressed archive file removes one or more layers of dependency that the software installer has. For example, some installers require a Java runtime to show the installer's graphical interface. I recently encountered an installer executable that had private/unsupported symbols statically linked into it. When those private interfaces were removed in a later version of the operating environment, the installer crashed and failed to make any progress. Had the software been provided in an extractable archive, it would have been readily available in the latter case. It appears that Oracle Corporation is moving in the right direction by releasing WebLogic 12c software as a zip file.

De-couple software installation from configuration: there should be a clear separation between installation and configuration. Once the software is in place, the relevant folks can always configure it as directed and needed. The customer just needs a simple tool or script to configure the software.

        => Off-topic: providing a web interface is even better. It gives the flexibility to configure the software from anywhere in the same network.

Contain everything in a single top-level directory: it makes patching easier even if the top-level directory is moved to a different location or host. There is no point in spreading the pieces of software across multiple locations anyway. Going back to the example of ORACLE_HOME, one shortcoming in the Oracle RDBMS installation is that a few directories/files, such as oraInventory, reside outside of ORACLE_HOME - so when moving ORACLE_HOME to another host, it is necessary to move all relevant files that are outside of ORACLE_HOME as well for successful database software patching.

With careful planning/design, Release Engineering can be as creative and innovative as the rest of the teams in delivering a software product out of the door --- but I guess it is up to the customers to demand that attitude.

PS: This blog post can be improved a lot. However, since it is mostly an opinion and a wish list, there is not much motivation to put more effort into it. And of course it is a generic discussion - nothing specific to a particular software product or corporation.

Solaris Tip: How-To Identify Memory Mapped Files


A memory-mapped (mmap'd) file is a shared memory object, or a file some portion or all of which has been mapped to virtual memory segments in the address space of an OS process. Here is one way to figure out whether a given object (file or shared memory object) is memory mapped in a process or not.

  1. find the file system inode number of the object
  2. look for that inode number in the address space of a given process

And here is an example. We are about to check a log file and a shared memory segment in a Siebel object manager's process address space.


# pfiles 8251
8251: siebmtshmw /siebel/siebsrvr/admin/Siebel81.isve02.s
..
1: S_IFREG mode:0744 dev:256,65539 ino:246660 uid:1234 gid:30 size:0
O_WRONLY|O_APPEND|O_CREAT
/siebel/siebsrvr/enterprises/Siebel81/isve02/log/StdErrOut/stderrout_8251_23311913.log
...
9: S_IFREG mode:0700 dev:256,65539 ino:246640 uid:1234 gid:30 size:6889472
O_RDWR|O_CREAT|O_EXCL
/siebel/siebsrvr/admin/Siebel81.isve02.shm
..

# pmap -sx 8251 | grep 246660
# <== stderrout_8251_23311913.log file was not a memory mapped file

# pmap -sx 8251 | grep 246640
F6400000 64 64 - - 8K r--s- dev:256,65539 ino:246640
F6410000 136 136 - - - r--s- dev:256,65539 ino:246640
F6432000 128 128 - - 8K r--s- dev:256,65539 ino:246640
...
<== Siebel81.isve02.shm was a memory mapped object

Oracle RDBMS & Solaris : Few Random Tips (Feb 2012)


These tips are just some quick solutions or workarounds. Use these quickies at your own risk.

[#1] Oracle Data Pump

Q: How to exclude the table definition while importing a table using Oracle Data Pump import utility?

A: Use EXCLUDE=TABLE/TABLE option.

eg.,

impdp login/password DUMPFILE=<DUMP_FILENAME> LOGFILE=<LOGFILE_NAME> \
DIRECTORY=<DB_DIR_NAME> TABLES=<TABLE_NAME> EXCLUDE=TABLE/TABLE



[#2] Workaround to ORA-01089: immediate shutdown in progress - no operations are permitted

When the database is in the middle of an instance shutdown, if another shutdown or startup is attempted, Oracle RDBMS may throw the above ORA-01089 error. The workaround is to force Oracle to start the database instance using the STARTUP FORCE option. This option shuts down the database instance (if running) with the equivalent of a shutdown abort and then starts it up.

eg.,

SQL> STARTUP FORCE



[#3] Quick steps to upgrade the Oracle database from version 11.2.0.[1 or 2] to 11.2.0.3

Execute the following in the same sequence as sysdba.


startup upgrade
!cd $ORACLE_HOME/rdbms/admin
@utlu112i.sql /* pre-upgrade information tool */
exec dbms_stats.gather_dictionary_stats (DEGREE => 64);
@catupgrd.sql /* create/modify data dictionary tables */
@utlu112s /* all components should be in VALID state */
shutdown immediate
startup
@catuppst.sql /* upgrade actions that do not require DB in UPGRADE mode */
@utlrp.sql /* recompile stored PL/SQL and Java code */
SELECT count(*) FROM dba_invalid_objects;
/* verify that all packages and classes are valid */
exit



[#4] Q: Solaris: how to get rid of zombie processes?

A: Run the following with appropriate user privileges.


ps -eaf | grep defunct | grep -v grep | preap `awk '{ print $2 }'`

An alternative (not as good as the previous one, but it may still work as expected):


prstat -n 500 1 1 | grep zombie | preap `awk '{ print $1 }'`



[Added on 03/01/2012]

[#5] Solaris: Many TCP listen drops

eg.,


# netstat -sP tcp | grep tcpListenDrop
tcpListenDrop =2442553 tcpListenDropQ0 = 0

To alleviate numerous TCP listen drops, bump up the value for the tunable tcp_conn_req_max_q


# ndd -set /dev/tcp tcp_conn_req_max_q <value>
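
To verify the new setting, query the same tunable. Note that ndd changes do not persist across reboots, so re-apply the setting from a boot-time script if needed.

# ndd -get /dev/tcp tcp_conn_req_max_q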



[Added on 03/02/2012]

[#6] Solaris ZFS: listing all properties and values for a zpool

Run: zfs get all <zpool_name> as any OS user

eg.,


% zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool 276G 167G 109G 60% ONLINE -
spec 556G 168G 388G 30% ONLINE -

% zfs get all rpool
NAME PROPERTY VALUE SOURCE
rpool type filesystem -
rpool creation Fri May 27 17:06 2011 -
...
rpool compressratio 1.00x -
rpool mounted yes -
rpool quota none default
rpool reservation none default
rpool recordsize 128K default
...
rpool checksum on default
rpool compression off default
...
rpool logbias latency default
rpool sync standard default
rpool rstchown on default



[#7] Solaris: listing all ZFS tunables

Run: echo "::zfs_params" | mdb -k with root/super-user privileges

eg.,


# echo "::zfs_params" | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x10000000
zfs_arc_min = 0x10000000
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
..
..
zio_injection_enabled = 0x0
zvol_immediate_write_sz = 0x8000

Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error


Here is a failure sample.


SQL> set serveroutput on
SQL> alter package APPS.FND_TRACE compile body;

Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY APPS.FND_TRACE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
235/6 PL/SQL: Statement ignored
235/6 PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
..

By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So the solution is to create the public synonym and grant the EXECUTE privilege on the DBMS_SYSTEM package to all database users or to a specific user.

eg.,


SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;

Synonym created.

SQL> GRANT EXECUTE ON dbms_system TO APPS;

Grant succeeded.

- OR -

SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;

Grant succeeded.

SQL> alter package APPS.FND_TRACE compile body;

Package body altered.

Note that merely granting execute privilege is not enough -- creating the public synonym is as important to resolve this issue.

Solaris Volume Manager (SVM) on Solaris 11


SVM is not installed on Solaris 11 by default.

# metadb
-bash: metadb: command not found

# /usr/sbin/metadb
-bash: /usr/sbin/metadb: No such file or directory

Install it using pkg utility.

# pkg info svm
pkg: info: no packages matching the following patterns you specified are
installed on the system. Try specifying -r to query remotely:

svm

# pkg info -r svm
Name: storage/svm
Summary: Solaris Volume Manager
Description: Solaris Volume Manager commands
Category: System/Core
State: Not installed
Publisher: solaris
Version: 0.5.11
Build Release: 5.11
Branch: 0.175.0.0.0.2.1
Packaging Date: October 19, 2011 06:42:14 AM
Size: 3.48 MB
FMRI: pkg://solaris/storage/svm@0.5.11,5.11-0.175.0.0.0.2.1:20111019T064214Z

# pkg install storage/svm
Packages to install: 1
Create boot environment: No
Create backup boot environment: Yes
Services to change: 1

DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 104/104 1.6/1.6

PHASE ACTIONS
Install Phase 168/168

PHASE ITEMS
Package State Update Phase 1/1
Image State Update Phase 2/2

# which metadb
/usr/sbin/metadb

This time metadb may fail with a different error.

# metadb
metadb: <HOST>: /dev/md/admin: No such file or directory

Check if md.conf exists.

# ls -l  /kernel/drv/md.conf 
-rw-r--r-- 1 root sys 295 Apr 26 15:07 /kernel/drv/md.conf

Dynamically re-scan md.conf so the device tree gets updated.

# update_drv -f md

# ls -l /dev/md/admin
lrwxrwxrwx 1 root root 31 Apr 20 10:12 /dev/md/admin -> ../../devices/pseudo/md@0:admin

# metadb
metadb: <HOST>: there are no existing databases

Now Solaris Volume Manager is ready to use.

eg.,
# metadb -f -a c0t5000CCA00A5A7878d0s0

# metadb
flags first blk block count
a u 16 8192 /dev/dsk/c0t5000CCA00A5A7878d0s0

OBIEE 11g: Resolving Presentation Services Startup Failure

ISSUE:

Starting Presentation Services fails with the error:

[OBIPS] [ERROR:1] [] [saw.security.odbcuserpopulationimpl.getbisystemconnection] [ecid: ] [tid: ] Authentication Failure.
Odbc driver returned an error (SQLDriverConnectW).
State: 08004. Code: 10018. [NQODBC] [SQL_STATE: 08004] [nQSError: 10018] Access for the requested connection is refused.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43126] Authentication failed: invalid user/password. (08004)[[

Also, connecting to the metadata repository (RPD) in online mode fails with a similar error.

Looking through the BI server log, nqserver.log, you may find an error message similar to the following:

[OracleBIServerComponent] [ERROR:1] [] [] [ecid: 0001J1LfUetFCC3LVml3ic0000pp000000] [tid: 1] 
[13026] Error in getting roles from BI Security Service:
'Error Message From BI Security Service: [nQSError: 46164] HTTP Server returned 404 (Not Found) for URL .' ^M

RESOLUTION:

  • Connect to WebLogic Server (WLS) Console -> Deployments. Ensure that all deployed components are in 'Active' state.

  • If any of the components is in 'Prepared' state, select that application and then click on "start servicing all requests"

  • Restart BI Server and Presentation Services

In some cases, the following additional step might be needed to resolve the issue.

  • Access the Enterprise Manager Fusion Middleware control: http://<host.domain>:port/em

  • Navigate to Business Intelligence -> coreapplication

  • 'Capacity Management' tab -> 'Scalability' sub-tab

  • Click on 'Lock and Edit Configuration' button

  • Enter the IP address in the 'Listen Address' field

  • Click on 'Activate Changes' followed by 'Release Configuration' buttons

  • Restart BI Server and Presentation Services

Also check these My Oracle Support (MOS) documents for more clues and information.

1387283.1 Authentication failed: invalid user/password
1251364.1 Error: "[nQSError: 10018] Access .. Refused. [nQSError: 43126] Authentication Failed .." when Installing OBIEE 11g
1410233.1.1 How To Bind Components / Ports To A Specific IP Address On Multiple Network Interface (NIC) Machines

Oracle E-Business Suite Tip : SQL Tracing

Issue:

Attempts to enable SQL tracing from the concurrent request form fail with the error:
Function not available to this responsibility. Change Responsibilities or contact your System Administrator

Resolution:

Switch responsibility to "System Administrator". Navigate to System -> Profiles, and query for the "Utilities : Diagnostics" profile (search string: "%Diagnostics%"). Once the profile is found, change its value to "Yes". Restart the web browser and try enabling SQL trace again.

Session Sharing with another User on *NIX and Windows

Oracle Solaris

Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example.

eg.,
% finger
Login Name TTY Idle When Where
root Super-User pts/1 Sat 16:57 dhcp-amer-vpn-rmdc-a
sunperf ??? pts/2 4 Sat 16:41 pitcher.sfbay.sun.com

In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show the sunperf user something -- what s/he is doing in her/his terminal, for example -- it can be accomplished with the following command.

script -a /dev/null | tee -a <target_terminal>

eg.,
# script -a /dev/null | tee -a /dev/pts/2
Script started, file is /dev/null
#
# uptime
5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81
#
# isainfo -v
64-bit sparcv9 applications
crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi
des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
32-bit sparc applications
crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi
des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
#
# exit
Script done, file is /dev/null

After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also, it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges.

The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output - so the screen capture from the initiator's terminal appears almost right away in the target terminal.

Though I never tested it, this technique may work on all *NIX and Linux flavors with little or no change. There might also be other ways to accomplish this.

[Thanks to Sujeet for sharing this tip]

Microsoft Windows

Most Windows users may rely on VNC services to share a desktop session. Another way to share a desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps.

  • Connect to the target Windows system using Remote Desktop Connection client
  • Launch Windows Task Manager
  • Navigate to the "Users" tab
  • Find the user session that you want to connect to and have full control over as the other user who is currently holding that session
  • Select the user name in Windows Task Manager, right click and choose the option "Remote Control"
  • A window pops up on the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?"

Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.

[OID] ldap_modify: Failed to find member in mandatory or optional attribute list

A sample LDAP entry and the resulting error message are shown below. The objective is simple - adding a new member (employee) to an existing group (Administrators).

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: member
member: cn=emp1234,ou=people,ou=entapp

% ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add member:
cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
ldap_modify: Object class violation
ldap_modify: additional info: Failed to find member in mandatory or \
optional attribute list.


The above error message is a generic one. It would have been nice had it shown the expected and actual inputs as part of the error. However, it gives us a hint that an object class was violated. In this example, the group "Administrators" was created under the object class groupOfUniqueNames.

% ldapsearch -p 3060 -h localhost -b "ou=groups,ou=entapp" -A "(objectclass=*)"
..
cn=Administrators,ou=groups,ou=entapp
Administrators,groups,entapp
cn
uniquemember
objectclass
..

RFC 4519 for Lightweight Directory Access Protocol (LDAP) requires the uniqueMember attribute within the groupOfUniqueNames object class. An excerpt from the original RFC:

3.6.  'groupOfUniqueNames'
...

( 2.5.6.17 NAME 'groupOfUniqueNames'
SUP top
STRUCTURAL
MUST ( uniqueMember $
cn )

MAY ( businessCategory $
seeAlso $
owner $
ou $
o $
description ) )

Going back to the issue at hand, the "add" attribute must be uniqueMember, not member, in the "modify" LDAP entry. That's the object class violation in this case. Now the fix to the issue is obvious.

The modified entry and the output from Oracle Internet Directory's ldapmodify command are shown below.

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: uniqueMember
uniqueMember: cn=emp1234,ou=people,ou=entapp

$ ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add uniqueMember:
cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
modify complete

Though the above example was derived from an Oracle Internet Directory (OID) environment, the problem and the solution are applicable to all environments running LDAP servers.

Enabling 2 GB Large Pages on Solaris 10

A few facts:
  • 8 KB is the default page size on Solaris 10 and 11 as of this writing
  • both hardware and software must have support for 2 GB large pages
  • SPARC T4 hardware is capable of supporting 2 GB pages
  • Solaris 11 kernel has built-in support for 2 GB pages
  • Solaris 10 has no default support for 2 GB pages
  • memory-intensive 64-bit applications may benefit the most from using 2 GB pages

Prerequisites:

OS: Solaris 10 8/11 (Update 10) or later
Hardware: SPARC T4. eg., SPARC T4-1, T4-2 or T4-4

Steps to enable 2 GB large pages on Solaris 10:

  1. Install the latest kernel patch or ensure that 147440-04 or later was installed

  2. Add the following line to /etc/system and reboot
    • set max_uheap_lpsize=0x80000000

  3. Finally check the output of the following command when the system is back online
    • pagesize -a

    eg.,
    % pagesize -a
    8192 <-- 8K
    65536 <-- 64K
    4194304 <-- 4M
    268435456 <-- 256M
    2147483648 <-- 2G

    % uname -a
    SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
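
    Once a memory-intensive 64-bit application is up and running, one way to confirm that it is actually backed by 2 GB pages is to look for the 2G page size in its pmap output. The process ID below is a placeholder:

    % pmap -sx <pid> | grep 2G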

E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll


Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work at hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

THREADS parameter

The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the downside, however, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which take away some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention.

eg.,

Set a value for THREADS parameter as shown below.


CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'THREADS';

COMMIT;

(I am not aware of any maximum value for THREADS parameter)


CHUNK_SIZE parameter

The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time.

When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted.

It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by +/-10 for CHUNK_SIZE parameter and observe the performance impact. A larger value may make sense during the main processing phase. Again experimentation is the key in finding the suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

eg.,

Set a value for CHUNK_SIZE parameter as shown below.


CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'CHUNK_SIZE';

COMMIT;

CHUNK_SIZE action parameter accepts a value that is as low as 1 or as high as 16000.


CHUNK SHUFFLE parameter

By default, chunks of assignment actions are processed sequentially by all threads - which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. By "not a good thing", I mean that the default behavior leads to contention in the database (in data blocks, for example).

It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

eg.,

Set chunk shuffling as shown below.


CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = 'Y'
WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

COMMIT;

Finally I recommend checking the following document out for additional details and additional pay action tunable parameters that may speed up the processing of Oracle Payroll.
    My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

Also experiment with different combinations of parameters and values until the right set of action parameters and values are found for your deployment.

Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster


An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com.

    The Oracle Optimized Solution for Oracle E-Business Suite

This solution was centered around the engineered system SPARC SuperCluster T4-4. Check the business and technical white papers, along with a number of relevant resources, online at the above optimized solution page for EBS.

What is an Optimized Solution?

Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is usually a well documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture along with various observations from installing the application on target platform to its behavior and performance in highly available and scalable configurations.

Oracle E-Business Suite R12 Use Case

Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution -- Financials (online - Oracle Forms & web requests), Order Management (online - Oracle Forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules when they become available.

Oracle Solaris Cluster is responsible for the high availability portion of the solution.

Performance Data

For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are mainly for educational purposes. They give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is on highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle. Such an attempt may lead to skewed, incorrect conclusions.

Questions & Requests

Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as SPARC T4-X or an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

emca fails with "Database instance is unavailable" though available


The following example shows the symptoms of failure, and the exact error message.


$ emca -repos create

...
Password for SYSMAN user:

Do you wish to continue? [yes(Y)/no(N)]: Y
Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks checkDbAvailabilityImpl
WARNING: ORA-01034: ORACLE not available

Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks throwDBUnavailableException
SEVERE:
Database instance is unavailable. Fix the ORA error thrown and run EM Configuration Assistant again.

Some of the possible reasons may be :

1) Database may not be up.
2) Database is started setting environment variable ORACLE_HOME with trailing '/'. Reset ORACLE_HOME and bounce the database.

For eg. Database is started setting environment variable ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db and bounce the database.

Fix:

Ensure that the ORACLE_HOME is pointing to the right location in $ORACLE_HOME/bin/emca file.

Rather than installing from scratch, if ORACLE_HOME was copied over from another location, it likely results in a wrong ORACLE_HOME location in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine, including the top-level directory where Oracle RDBMS was properly installed using the installer.
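
A quick way to spot a stale hardcoded path is to check the ORACLE_HOME assignment inside the emca script, then edit any value that does not match the actual installation location. eg.,

$ grep -n "ORACLE_HOME=" $ORACLE_HOME/bin/emca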

Solaris Tips : CPU Cache Sizes, Changing System Date


Tip #1: Finding the CPU cache sizes from Solaris operating environment

Use the prtpicl utility to list out system configuration, and look for the cache sizes within that output.

eg.,


$ prtpicl -v |grep cache
:l1-icache-size 0x10000
:l1-icache-line-size 0x40
:l1-icache-associativity 0x2
:l1-dcache-size 0x10000
:l1-dcache-line-size 0x40
:l1-dcache-associativity 0x2
:l2-cache-size 0x500000
:l2-cache-line-size 0x100
:l2-cache-associativity 0xa

[Updated 01/14/13] The above output was gathered from an M4000 system that has SPARC64 VII processors.

Recent update releases of Solaris 10 and 11 show the prtpicl reported cache sizes in decimal numbers.

Here is a slightly improved prtpicl command that filters out unwanted output. (Courtesy: Georg)

/usr/sbin/prtpicl -v -c cpu | egrep "^ +cpu|ID|cache"

Tip #2: Changing the System Date

Use date to change the system date. For example, to set the system date to March 9, 2008 08:15 AM, run the following command. Syntax: date mmddHHMMyy


# date 0309081508

Sun Mar 9 08:15:03 PST 2008