I noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out we can delete all files older than 60 days. I decided to write a tiny shell script to purge all files older than 60 days and schedule it with crontab, so I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:
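A sketch of that command, shown here against a scratch directory instead of the real /interface/inbound so it is safe to try:

```shell
# The purge command described above would be:
#   find /interface/inbound -type f -mtime +60 -exec rm -f {} \;
# Safe-to-run demo against a scratch directory:
dir=$(mktemp -d)
touch "$dir/old.dat" "$dir/new.dat"
touch -t 202001010000 "$dir/old.dat"   # backdate one file well past 60 days
find "$dir" -type f -mtime +60 -exec rm -f {} \;
ls "$dir"                              # only new.dat remains
```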
It finds and deletes all files in the /interface/inbound directory that are older than 60 days. After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added a "-name" test to the find command:
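The same demo with the "-name" test added; now only old *.csv files are removed:

```shell
# Real-path version:
#   find /interface/inbound -type f -name '*.csv' -mtime +60 -exec rm -f {} \;
# Safe-to-run demo:
dir=$(mktemp -d)
touch "$dir/a.csv" "$dir/a.txt"
touch -t 202001010000 "$dir/a.csv" "$dir/a.txt"   # backdate both files
find "$dir" -type f -name '*.csv' -mtime +60 -exec rm -f {} \;
ls "$dir"   # a.txt survives even though it is old
```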
All csv files in /interface/inbound that are older than 60 days will be deleted. But then the request changed: I was asked to delete "*.xls" files in addition to "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert... I tried several things, like adding another "-name" to the find command:
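Here is why that attempt deletes nothing: consecutive tests in find are implicitly ANDed, and no single file name can match both patterns at once.

```shell
# Two -name tests are ANDed, so this asks for files that are
# both *.csv AND *.xls at the same time, which never matches:
dir=$(mktemp -d)
touch "$dir/a.csv" "$dir/b.xls"
touch -t 202001010000 "$dir/a.csv" "$dir/b.xls"
find "$dir" -type f -name '*.csv' -name '*.xls' -mtime +60 -exec rm -f {} \;
ls "$dir"   # both files are still there
```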
But no file was deleted. A few moments later I understood that I was trying to find csv files which are also xls files... (logically impossible, of course). After struggling a little with the find command, I managed to make it work:
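The working version groups the name tests with escaped parentheses and combines them with -o (OR):

```shell
# Real-path version:
#   find /interface/inbound -type f \( -name '*.csv' -o -name '*.xls' \) -mtime +60 -exec rm -f {} \;
# Safe-to-run demo:
dir=$(mktemp -d)
touch "$dir/a.csv" "$dir/b.xls" "$dir/c.txt"
touch -t 202001010000 "$dir/a.csv" "$dir/b.xls" "$dir/c.txt"
find "$dir" -type f \( -name '*.csv' -o -name '*.xls' \) -mtime +60 -exec rm -f {} \;
ls "$dir"   # only c.txt remains
```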
:-) Aviad
Tuesday, June 9, 2009
Purge old files on Linux/Unix using “find” command
Posted by Aviad at 11:30 PM 4 comments
Labels: Unix\Linux
Wednesday, May 20, 2009
Upgrade Java plug-in (JRE) to the latest certified version
If you have already migrated to the native Sun Java JRE with Oracle EBS 11i, you may want to update to the latest certified release from time to time. For example, suppose your EBS environment is configured with Java JRE 6 update 5 and you want to upgrade your clients to the latest JRE 6 update 13. The upgrade process is very simple:
That's all... Since we upgraded our system to JRE 6 update 13 two weeks ago, our users haven't complained about the mouse focus issues and forms freezes they experienced before. So it was worth it. If you haven't migrated from Jinitiator to the native Sun Java plug-in yet, it's highly recommended to do so soon, as Jinitiator is going to be desupported shortly. See the following post for detailed, step-by-step migration instructions: Upgrade from Jinitiator 1.3 to Java Plugin 1.6.0.x. You are welcome to leave a comment. Aviad
Posted by Aviad at 1:15 AM 3 comments
Labels: Developer 6i, Upgrades
Tuesday, March 17, 2009
Corruption in redo log file when implementing Physical Standby
Lately I started implementing Data Guard - Physical Standby - as a DRP environment for our production E-Business Suite database, and I must share one issue I encountered during the implementation. I chose one of our test environments as the primary instance, and used a new server, which had been prepared for the production standby database, as the server for the test standby database. Both run Red Hat Enterprise Linux 4. The implementation went fast with no special issues (at least I thought so...): everything seemed to work fine, archived logs were transmitted from the primary server to the standby server and successfully applied on the standby database. I even executed a switchover to the standby server (both database and application tier), and switched back to the primary server with no problems. The standby database was configured for maximum performance mode, I also created standby redo log files, and LGWR was set to asynchronous (ASYNC) network transmission. The exact settings from the init.ora file:
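The settings in question looked roughly like this (the service and DB names below are placeholders, not the actual values from my environment):

```
log_archive_config='DG_CONFIG=(prod,stby)'
log_archive_dest_2='SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
log_archive_dest_state_2='enable'
```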
At this stage, when the major part of the implementation was done, I found some time to deal with other tasks, like interfaces to other systems, scripts, configuring rsync for concurrent log files, etc., and some modifications to the setup document I wrote during the implementation. While working on those, I left the physical standby instance active, so archive log files were transmitted and applied on the standby instance. After a couple of hours I noticed the following error in the primary database alert log file:
I don't remember ever having a corruption in a redo log file before... The primary instance resides on a NetApp volume, so I checked the mount options in /etc/fstab, but they were fine. I asked our infrastructure team to check whether something had gone wrong with the network around the time of the corruption, but they reported nothing unusual.

I had no choice but to reconstruct the physical standby database, since when an archive log file is missing, the standby database is out of sync. I set log_archive_dest_state_2 to defer so no further archive logs would be transferred to the standby server, cleared the corrupted redo log files (alter database clear unarchived logfile 'logfile.log'), and reconstructed the physical standby database. Meanwhile (copying the database files takes a long time...), I went over the documentation again; maybe I had missed something, maybe I had configured something wrong. I read a lot and found nothing that could shed light on this issue.

At this stage the standby was up and ready. First, I held up the redo transport service (log_archive_dest_state_2='defer') to see whether I would get a corruption while the standby was off. After one or two days with no corruption, I activated the standby. Then I saw the following sentence in Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2): One moment, I thought to myself, the standby server is based on AMD processors and the primary server on Intel's... Is that the problem?! Meanwhile, I got a corruption in a redo log file again, which assured me there was a real problem and it wasn't accidental. So I used another AMD-based server (identical to the standby server) and started all over again, primary and standby instances. After two or three days with no corruption, I started to believe the difference in processors was the problem.
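The reconstruction steps just described, sketched as SQL*Plus commands (the log file name is a placeholder, not the real one from my environment):

```sql
-- Stop shipping redo to the standby while rebuilding it
ALTER SYSTEM SET log_archive_dest_state_2='defer' SCOPE=BOTH;

-- Clear the corrupted, unarchived redo log file (placeholder name)
ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'logfile.log';
```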
But one day later I got a corruption again (oh no...). I must say that on the one hand I was very frustrated, but on the other hand it was a relief to know it wasn't the difference in processors. So it's not the processors, not the OS, and not the network. What else can it be?! And here my familiarity with the "filesystemio_options" initialization parameter begins (thanks to Oracle Support!). I don't know how I missed this note before, but it's all written here - Note 437005.1: Redo Log Corruption While Using Netapps Filesystem With Default Setting of Filesystemio_options Parameter. When the redo log files are on a NetApp volume, "filesystemio_options" must be set to "directio" (or "setall"). When "filesystemio_options" is set to "none" (like my instance before), reads and writes to the redo log files go through the OS buffer cache. Since NetApp storage is based on NFS (a stateless protocol), the consistency of asynchronous writes over the network is not guaranteed; some writes can be lost. Setting "filesystemio_options" to "directio" makes writes bypass the OS cache layer, so no write is lost. Needless to say, once I set it to "directio" everything was fine and I haven't gotten any corruption since. Aviad
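The fix itself is a one-liner (it requires an instance restart to take effect, since the parameter is not dynamic):

```sql
-- 'SETALL' also works: it enables both direct and asynchronous I/O
ALTER SYSTEM SET filesystemio_options='DIRECTIO' SCOPE=SPFILE;
```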
Posted by Aviad at 8:54 AM 0 comments
Labels: Data Guard, Network, Troubleshooting, Unix\Linux
Tuesday, March 10, 2009
JRE Plug-in “Next-Generation” – Part II
In my last post "JRE Plug-in “Next-Generation” – to migrate or not?" I wrote about a Forms launching issue in EBS right after upgrading JRE (Java Plug-in) to version 6 update 11, which works with the new next-generation Java Plug-in architecture. The problem happens intermittently, and it goes away only when I disable the "next-generation Java Plug-in". Following an SR I opened with Oracle support about this issue, I was asked to verify that the profile option "Self Service Personal Home Page Mode" is set to "Framework Only". We had this profile option set to "Personal Home Page", as our users prefer it to the "Framework Only" mode. It's important to note that "Personal Home Page" is not a supported value for the "Self Service Personal Home Page Mode" profile option and may cause unexpected issues. After setting the profile option to "Framework Only" the problem was resolved and the screen doesn't freeze anymore.

So the solution in my case was to set the profile option "Self Service Personal Home Page Mode" to "Framework Only" (we are still testing it, but it looks fine so far). However, there are two more options that seem to work even when the profile option is set to "Personal Home Page" and the "next generation Java Plug-in" is enabled:

1) Uncheck "Keep temporary files on my computer" - I'm not sure how or why, but it solves the problem; no more freezing this way.

2) Set "splashScreen" to null - no need to bounce any service. Again, it's not clear how or why, but it solves the problem as well.

Now we just need to convince our users to accept the "Framework Only" look and feel, and then we would consider upgrading all our clients to the new next-generation Java Plug-in. You are welcome to leave a comment or share your experience with the new Java Plug-in. Aviad
Posted by Aviad at 2:22 AM 0 comments
Labels: Developer 6i, Troubleshooting, Upgrades
Wednesday, February 18, 2009
JRE Plug-in “Next-Generation” – to migrate or not?
It has been more than half a year since we migrated from Oracle Jinitiator to the Sun Java JRE Plug-in (Java 6 update 5) in our Oracle Applications (EBS) system, and I must say, I'm not satisfied yet. For the first months we struggled with a lot of mouse focus bugs which made our users very angry about this upgrade. Although we've applied some patches related to these bugs, some still have no resolution. As part of an SR we had opened about a mouse focus issue, we were advised by Oracle to install the latest Java JRE (Java 6 update 12 these days) as a possible solution for the remaining bugs. Starting with Java 6 update 10, Sun introduced the new "next-generation Java Plug-in", which causes trouble with Oracle EBS. You can read more about this new architecture at the Sun Java site - "What is next-generation Java Plug-in". Right after installing Java 6 update 11, I encountered a problem: when trying to open forms, the screen freezes. There is an unpublished open bug for this problem: Bug 7875493 - "Application freezes intermittently when using JRE 6U10 and later". I've been told by Oracle support that they have some incompatibilities with the new next-generation architecture and that they are working with Sun on it. Meanwhile there are two workarounds (the second doesn't work for me, but it was suggested by Oracle support):

1) Disable the "next generation Java Plug-in" option.

2) Set the swap file to system managed and tune the heap size for Java: go to Control Panel -> Java -> select the "Java" tab -> click "View..." (in the Java Applet Runtime Settings frame) -> update the "Java Runtime Parameters" field with "-Xmx128m -Xms64m". This workaround doesn't work for me.

For now, I've decided to stay with the "old" Java Plug-in 6 update 5 and not upgrade our users to the new next-generation Java Plug-in. I hope the coming updates of the Java Plug-in will be better, or that Oracle will publish a patch to solve this problem.
I'll keep you updated as soon as I have more info. Aviad
Posted by Aviad at 2:42 AM 2 comments
Labels: Developer 6i, Troubleshooting, Upgrades
Thursday, January 29, 2009
How to enable trace for a CRM session
Posted by Aviad at 7:30 AM 2 comments
Labels: Sql scripts, Troubleshooting
Monday, December 22, 2008
Oracle Database Resource Manager 11g - Undocumented New Parameters
I've played around with Oracle Database Resource Manager in 10g; it's quite nice and can be very useful for systems with high CPU usage, but I found the inability to limit I/O a drawback, since in most cases I've faced, limiting I/O is more necessary than limiting CPU. When you have, let's say, 8 CPUs on your machine, all 8 need to be 100% utilized by Oracle sessions before the Resource Manager starts limiting sessions. However, if your machine's I/O capability is 50 mbps, you need only one or two sessions performing intensive I/O (a batch job or a heavy report) to make the whole database very slow. In Oracle Database 11g Release 1, Resource Manager has gotten some new features related to I/O. So I installed 11g, ran some tests, and found some interesting issues. I'm not going to write about Resource Manager basics or about the 11g enhancements, as some great articles have already been published about them; for example, you can read Tim's blog post - "Resource Manager Enhancements in Oracle Database 11g Release 1". But I am going to discuss one missing capability (in my opinion) that will hopefully be available in Oracle Database 11g Release 2, along with 2 new parameters which are already present but inactive and undocumented. For those who are not familiar with Oracle Database Resource Manager, a short introduction: Oracle Database Resource Manager helps us prioritize sessions and optimize resource allocation within our database by mapping sessions to resource consumer groups and allocating resources among those groups according to an active resource plan.
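For readers who have never touched the package, a minimal plan definition looks something like this (all plan and group names here are illustrative only):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAY_PLAN',
    comment => 'demo plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'BATCH_GRP',
    comment        => 'batch jobs');
  -- Give batch jobs 20% of CPU at priority level 1, the rest to everyone else
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'BATCH_GRP',
    comment          => 'cap batch CPU',
    mgmt_p1          => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    mgmt_p1          => 80);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```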
Only one resource plan is active at a time. When Oracle Database 11g was introduced, some new I/O-related features for Resource Manager were revealed. Among them:
Oracle has added the option to "capture" Oracle sessions by the number of I/O requests or by the megabytes of I/O they issue, in order to move them to a lower-priority consumer group. I have a very fundamental doubt about this enhancement, as I don't get the point of "capturing" an I/O-intensive session and moving it to a low-priority consumer group which can only have a CPU limit...?! The reason we "capture" the session is the amount of I/O it performs, but once we move it to a low-priority consumer group, we can only limit its CPU resources; we can't limit the amount of I/O for a consumer group. It would have been very useful if Oracle had added the ability to limit I/O for a consumer group, like we can limit CPU (with mgmt_pN). What is missing here is the ability to limit I/O for a specific consumer group in terms of maximum I/O requests per second or maximum megabytes per second. Will Oracle enhance Resource Manager in 11g Release 2 with this capability? I don't have a confident answer, but I assume it will. While playing around, I noticed two new parameters to the CREATE_PLAN procedure - MAX_IOPS and MAX_MBPS. At first sight this looked like the answer to my question (the ability to limit I/O for sessions within a plan), but it's not... Those two parameters are undocumented and completely absent from the Oracle 11g Release 1 documentation, yet available in the 11g Release 1 database:
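For context, the 11g "capture" mechanism described above is expressed through plan directives; a sketch (names illustrative, the session is switched to the target group once it crosses either threshold):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAY_PLAN',
    group_or_subplan    => 'ONLINE_GRP',
    comment             => 'demote I/O-heavy sessions',
    switch_group        => 'LOW_PRIORITY_GRP',
    switch_io_megabytes => 1000,     -- switch after 1000 MB of I/O
    switch_io_reqs      => 10000);   -- or after 10000 I/O requests
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```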
I tried to create a new plan using one of these two parameters, but it returned an error for each value I tried.
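For the record, this is the kind of call I attempted (plan name and value are illustrative):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan     => 'IO_LIMIT_PLAN',
    comment  => 'trying the undocumented parameter',
    max_iops => 500);   -- fails: any non-NULL value is rejected in 11gR1
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```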
I confirmed it with Oracle support, and their answer was: "This is currently an expected behaviour, we can not explicitly set either max_iops or max_mbps to any value other than null, that's why these parameters are currently not included in the documentation." So my guess is that these parameters are placeholders for an I/O-limiting capability to come. I'll check again as soon as Oracle Database 11g Release 2 is announced and post an update. You are welcome to leave a comment and/or share your opinion on this topic. Aviad
Posted by Aviad at 7:31 AM 4 comments
Labels: Performance