Monday, December 13, 2010

Another crapplication

It never ceases to amaze me how software that sells for five figures or more can contain so much shit code. My most recent example: I noticed a CPU-hogging process running on our Oracle server and was astonished by two things. First was a SELECT * in a static query containing eight inner joins. Second, that same statement (and over 3,000 nearly identical statements) was not using a bind variable in the WHERE clause for the primary key (a VARCHAR2(254) containing a GUID).

There is no excuse for production code containing a SELECT * from any number of tables. What made this example even more inexcusable was that the table contained a BLOB and, judging by the pattern of other SQL statements, these queries were only pulling back a description and a last-updated date.

There is only one excuse I know of to NOT use bind variables in prepared statements. To be honest, I do not know if this is one hundred percent true of Oracle or other databases, but if a query selects by status or another column where the value distribution is extremely skewed, then it can be best not to bind that column. If a table contains a billion orders with a status of filled and a couple hundred with a status of cancelled, and the query selects only by status, then a single bind variable cannot produce an optimal plan for both cases (a full table scan is right for filled, an index scan for cancelled). In the example above, though, the software was creating, parsing, opening, fetching, then closing the same logical SQL thousands of times when it should have created and parsed it once, then bound and executed (reused) it thousands of times. It was a single-row select by primary key (a GUID, even) and that should always use a bind variable.
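The parse-once, bind, execute-many pattern is easy to demonstrate. Here is a sketch using Python's sqlite3 module as a stand-in for Oracle (the table and data are invented for the demo), but the shape is the same in any OCI or JDBC program:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (guid TEXT PRIMARY KEY, descr TEXT)")
rows = [("guid-%04d" % i, "row %d" % i) for i in range(1000)]
conn.executemany("INSERT INTO orders (guid, descr) VALUES (?, ?)", rows)

# The crapplication way: a brand-new literal SQL text for every lookup,
# which forces a fresh parse each time (and invites SQL injection).
literal = [conn.execute(
               "SELECT descr FROM orders WHERE guid = '%s'" % g
           ).fetchone()[0] for g, _ in rows[:3]]

# The right way: one SQL text, parsed once, bound and executed repeatedly.
bound = [conn.execute(
             "SELECT descr FROM orders WHERE guid = ?", (g,)
         ).fetchone()[0] for g, _ in rows[:3]]

print(literal == bound)  # same rows either way
```

On Oracle the literal version also floods the shared pool with thousands of one-off cursors, which is exactly the CPU-hogging behavior described above.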

The developers might not be at fault here. The above scenario reeks of poor, generated, object oriented code. Get the primary keys for all of your data objects then read them one by one to display on some dumb-ass web page. More likely: read incoming XML data stream into business logic tier then instantiate all the needed business and data turds then have the business turds use the data turds to retrieve data using the exact opposite of a single, bound, efficient SQL statement, then convert the results of all the turds to XML so the rendering engine can puke it out to a web browser, iDroid, dumb phone, really dumb phone, tablet, pad, tampon, car dashboard, HAM radio or intelligent kitchen appliance.

What frustrates me is that I am seeing this crap in a pricey application. I would be tolerant if it were coming from some schmuck trying to write an application to pay his (or her, no wait, probably his) way through college. This application has numerous customers, all of whom are suffering with this crapplication, and the unlucky system administrators are stuck between the bean counters and the vendor when attempting to tune the piece of weasel poop.

A good Ask Tom article

Tuesday, November 23, 2010

HTC EVO the first month

I have had my HTC EVO for over a month now. The experience has been good so far.

I have a couple of nitpicks. The message indicator light should flash while the device is connected to a charger or USB. SMS messages and mail messages should have the option of repeating the notification tone until responded to (like a pager).

Battery life is poor, but what can anyone expect? Does anyone really believe they will be watching HD video over 4G without sucking that wafer-thin battery dry in less than an hour? If I rarely use the device the battery will be around 53 percent charged by the end of the day, with no Wi-Fi, 4G, Bluetooth or GPS services active. The wife uses Pandora all day at work so she leaves the phone attached via USB all day. Anyone texting, talking, Facebarfing and Twitching all day will likely not have a charge at the end of the day.

The 4G service is now available around my house out in the middle of nowhere but does not appear to be available in downtown Columbus where I work (strange). When I ran speed tests using 4G I was getting very mixed results depending upon where I was and what destination I was hitting. The down speed was comparable with Wi-Fi but the up speed was much faster about half the time. The whole 4G experience is going to need a year or two before the service is widely available and the firmware for phones that use it is stable.

I have also paid my first bill. The initial shock took a few beers to get through.

Tuesday, November 16, 2010

Bye to Schelotto, Hejduk, more

It is definitely a sad day for the Crew organization, but in the long run it is just a day. We all knew that good guys like Oughton and Hejduk, who have put a face on this team, would no longer be able to contribute toward the common goal of the organization and most fans: hardware. Legends like Schelotto that brought class and talent would eventually either move on or be pushed out. Even Padula, the pirate at left back, who just a season or two ago was considered a key component of the Crew championship run, would fade away.

I think what makes today really hard to deal with is the sudden harshness of the events. Our season was over in an instant with a missed penalty, and most of the heart, soul and leadership is likely gone just as abruptly. As the shock of the playoff loss set in, maybe only a handful of people realized that would be the last time they would see those four players in a Crew uniform; I sure did not.

As mentioned in my previous blog post, I was hoping Schelotto was going to be retained one more season to help teach and shape the attacking midfield and his successor. That is what he should have been doing this 2010 season; getting his replacement ready and perhaps even using his last season as a rallying point to will the rest of the team back into the finals. Instead it is an abrupt end that will leave a bitter taste.

It is easy to feel like twice shat feces about this but it is now up to the Crew organization to deliver. They want to go young, fine. Fans have survived youth movements with each coach before Bob. 

The most important priority now is to get that skilled player and face for the Crew. My wife knows little about soccer but she knew who Schelotto and Hejduk (the surfer dude with the long hair, no!) were and, more importantly, what they brought to the team: Schelotto was their best player and Hejduk was a huge fan favorite (remember the pictures of him in the north end). 

We still need a finisher up top. I like Renteria but seriously, it is about time we invested a little more at the position beyond what we have done before.

I also am willing to hold off on judgment until we get some results. It will be months and a couple of drafts before the Crew will know where they need to throw their money. That is going to be a long, suck-ass couple of months, too. We were fortunate to have Schelotto and Hejduk while they were here and should be thankful for it. It is now up to the Crew to find the next group of players for us to be thankful for.

Also, just read on the forums: Barley's Christmas Ale Guest Tapper: Columbus Crew captain Frankie Hejduk

Thursday, November 11, 2010

Custom EVO wallpapers

There appears to be a lot of confusion and mixed information on the best way to do wallpapers for the HTC EVO mobile device. There are about four resolutions and a couple of techniques in various forums. I did some of my own research and this is what I found. I assume you are using a stock device and have not removed the base UI.

The EVO screen is 480 x 800 pixels (all numbers that follow are in pixels).

There are seven homes per scene: the middle home plus three to the left and three to the right.

The best resolution for incoming images is 960 x 800.

When choosing wallpaper, expand the green boundary box to use the entire image. This results in the least amount of distortion of the final image displayed on the EVO homes.

The top 38 pixels will never be seen; they are covered by the status bar. If you have an embarrassing zit on your forehead or do not want folks to know you are a NASCAR fan, then make sure your acne and logos are in the top 38 pixels of the wallpaper.

The leftmost and rightmost 48 pixels will never be seen. On the left side, pixels 1-48 are hidden, 49-59 can be seen momentarily when swiping right when on the leftmost home, and pixel 60 bleeds onto the leftmost home. On the right side it is the same just in reverse; but the last pixel does not bleed into the rightmost home.

The bottom 56 to 70 pixels are usually hidden by the lock/menu bar except when you pinch and then select a home and when the device is locked. So food stains on your shirt, nipples and muffin tops would go in that area.

The extended homes do not have their own separate wallpaper. The center home is in the middle 480 x 800 pixels of your wallpaper. When you swipe in a direction to get to one of the extended homes the wallpaper is shifted 60 pixels in that direction (240 pixels on each extended side, 60 of those are not visible). 

So start with a 840 x 762 image assuming that the bottom 60 or so pixels will be seen when the device is locked but usually will be hidden. Add borders (solid black is fine) to the top, left and right to make the image 960 x 800. Send it to your device using whatever method you choose and then enjoy.
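If you want to sanity-check the geometry, the 60-pixel shift works out like this (a quick sketch; the function name and the -3 to 3 home indexing are mine, not HTC's):

```python
def home_window(home_index):
    """Horizontal span of the 960-wide wallpaper shown on one home.

    home_index runs -3 (leftmost) to 3 (rightmost); 0 is the center home.
    """
    if not -3 <= home_index <= 3:
        raise ValueError("the EVO has seven homes: -3 through 3")
    left = 240 + 60 * home_index  # the center home shows the middle 480 pixels
    return (left, left + 480)

print(home_window(0))   # (240, 720) -- the center home
print(home_window(-3))  # (60, 540)  -- pixel 60 bleeds onto the leftmost home
print(home_window(3))   # (420, 900) -- the rightmost 60 pixels stay hidden
```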

Of course, someone from HTC will come along and probably debunk all of this but it works for me.

Monday, November 08, 2010

The Columbus Crew 2010, summary

I find myself in a similar situation. Last year on this day I wrote about the Columbus Crew's premature exit from the MLS playoffs to Real Salt Lake. This season the same thing has happened, just at the hands of the Colorado Rapids. So when I look back at that 2009 season post, what has changed and what has stayed the same?

I still hate the MLS playoff system and feel it gives the advantage to the team with the lesser record, because that team is likely to use its home field advantage to pile up goals in the first game, then play a "boring" defensive system in the second leg, knowing the top-seeded team must win by one more goal than the deficit.

There were positive spots to the 2010 season. The Crew made it to the playoffs (falling to Colorado on a missed penalty kick), to the finals of the Lamar Hunt U.S. Open Cup (falling to Seattle), and are still in the CONCACAF Champions Cup/Champions League.

Sagging attendance would probably lead the negative points when the league and cup results are removed. The Crew did not find a replacement for Schelotto (that we know of). Some players that should have been relegated to the bench this season are still signed through next season.

I also think it still boils down to one word: inconsistency. First there was the inconsistent lineup due to injuries to key players like Rogers, O'Rourke, Hesmer and Renteria. Second, inconsistent performances from Rogers, Hejduk, Hesmer, Padula and, at times, Schelotto. Forwards would be hot one game, go cold, go to the bench for another forward, that forward would be hot, and so on. Third, inconsistent coaching. Robert Warzycha seemed to get out-coached near the end of the season (again) and many have questioned his line-up choices.

So (again) where does the Crew go from here?

Warzycha is not the immediate problem. If Kreis and Nowak can coach and get results then Warzycha can. The Crew organization as a whole is degrading before our eyes from the top down. The Crew need a shot in the arm from somewhere whether it be an exciting new player or some new feature of the team or something marketable.

I will probably get laughed at for this but I think we should find some way to get Jon Busch back as backup keeper. Jon is one of those franchise players you would like to always have on your team in some way shape or form. Bring him back to where he started. Gruenebaum would probably go.

In the defense we need to find a replacement for Hejduk in a starting role. Francis can start for Padula if we keep him. I am guessing Iro or Brunner will go to expansion leaving O'Rourke in the mix with Marshall.

Gaven and Carroll are the only midfield players I would keep. Moffat had sub-adequate performances (again, inconsistency). Ekpo and Burns are good players. Rogers is a huge question mark but I would love to see him get a solid season with Schelotto. Oughton can be kept with the team as long as it is feasible to do so. Schelotto should be retained for one more season but the Crew need to find his replacement this season.

The forward spots need overhauled. Garey will likely go to expansion. Renteria should start and if Mendoza can put in a consistent performance like the one at the home playoff match then he should be in the mix as well. Lenhart is a bench player. We could still use a quality striker but I do not see us getting one, ever (pessimistic but realistic).

If I were asked to name the priorities for the 2011 season they would be: retain Schelotto for one more season as a player, acquire Schelotto's replacement so he can work alongside Guillermo and develop to lead the team. The next thing the Crew needs is a forward capable of playing with Schelotto (and his successor) and scoring a bucket load of goals. Warzycha should be retained as coach unless an obvious and adequate replacement is both available and feasible for the organization. Last, the Crew need a shot in the arm in the form of something exciting in the organization or a charismatic player to get some buzz generated about the team.

Tuesday, October 19, 2010

Okay, I'm one of the sheep now

I purchased a HTC EVO smart phone and service plan. I feel, strange...

It is a really nice device. Calling this and all other new generation of devices a phone is really stupid. A phone is something you shove in your ear and talk to someone with. This is a mobile device, not a phone.

It has the Android 2.2 system, etc. etc. You can look all of that crap up on Google.

Speaking of Google, the synchronization with Google contacts was instantaneous. Within two minutes the device was activated and all of my contacts were in the phone and ready to use. That is very awesome. Facebook and Yahoo crap works too.

The process of purchasing the device took two hours. I have a history with, um, Tnirps. It took nine years to clear up this history and it still was not totally clean when we went to the Tnirps store. The Tnirps store was a petri dish of some of the oddest, most obnoxious, and just plain worst examples of humanity. The chick with the tattoos and the half mohawk was interesting. Then there were the two gangstas, both stoned solid and bitching about the five minute wait. Disgusting. 

What sucks about the HTC EVO is that Tnirps is charging us an extra $10 a month for 4G service that does not exist in any consistently reliable form within a hundred miles of the city. What a crock. The stand Tnirps takes is, tough cookies; pay the $10 and we might get you 4G service by November.

Just because I am one of the sheep now does not make me not hate the people that drive while using their phones and devices. I saw some woman trying to manage her mobile device while driving on I-70 in rush hour traffic; not paying attention at all to the road and vehicles in front of her.

They should create a special lane for cell phone drivers; just like the special lane for bus and taxi services. Every mile should have a turn off that leads into a car crusher so if the driver is not paying attention they get in a one way lane to a 6 foot square of death. If you get to your destination without crashing into someone or causing an accident behind you or getting demolished by the idiot on the phone behind you and can pay attention enough to not get sucked into the car crusher then congratulations.

Once I get past making backgrounds for my home pages and ring tones using Bender from Futurama I might actually learn how to use the device and have it become a valuable business and social (bwah hah hah) asset.

Wednesday, August 11, 2010


Someone was kind enough to argue with me recently about doing a UNION ALL instead of a FULL JOIN in Oracle (in a MERGE). This same poop chute also posted a spam link that probably led to some Trojan downloader.

To make a long post short, Oracle usually does what it thinks is the best way of fetching data and it is spot on most of the time. Hints and fresh statistics can be used when mother optimizer is not right.

Consider this table:

CREATE TABLE megacrap AS
SELECT LEVEL AS mega_id
     , TO_CHAR(LEVEL) AS mega_varchar
     , TRUNC(SYSDATE) AS mega_date
  FROM dual
CONNECT BY LEVEL <= 4000;

ALTER TABLE megacrap ADD(CONSTRAINT pk_megacrap PRIMARY KEY(mega_id));

Updated with this MERGE:

MERGE INTO megacrap m
USING (
    SELECT NVL(v1.mega_id, t1.mega_id) AS mega_id
         , v1.mega_varchar
         , v1.mega_date
      FROM megacrap t1
      FULL JOIN (
            SELECT LEVEL AS mega_id
                 , TO_CHAR(LEVEL) AS mega_varchar
                 , TRUNC(SYSDATE) AS mega_date
              FROM dual
           CONNECT BY ROWNUM BETWEEN 1 AND 4100 --- 100 new rows
           ) v1 ON v1.mega_id = t1.mega_id
     WHERE v1.mega_id IS NULL OR v1.mega_id NOT IN (1,2,69) --- delete 1,2,69
    ) u
ON (m.mega_id = u.mega_id)
WHEN MATCHED THEN UPDATE SET
      m.mega_varchar = u.mega_varchar
    , m.mega_date = u.mega_date
    WHERE DECODE(m.mega_varchar,u.mega_varchar,1,0) = 0 OR DECODE(m.mega_date,u.mega_date,1,0) = 0
    DELETE WHERE m.mega_varchar IS NULL
WHEN NOT MATCHED THEN INSERT (m.mega_id, m.mega_varchar, m.mega_date)
    VALUES (u.mega_id, u.mega_varchar, u.mega_date);

Guess what the plan looks like: MERGE STATEMENT

    • VIEW
        • VIEW
            • UNION ALL
                • VIEW
                    • COUNT
                        • FAST DUAL
                • NESTED LOOPS ANTI
                    • VIEW
                        • COUNT
                            • FAST DUAL
Mother optimizer decided a UNION ALL was the best option. Now which SQL statement do you want to code and maintain? The FULL JOIN above or the UNION ALL with GROUP BY or UNION ALL with GROUP BY and ANTI JOIN?

Oracle Database SQL Language Reference 11g Release 2 (11.2)

Monday, August 09, 2010

Cell phones while driving, again

I have become sick and tired of people in general. The stupidity and callousness of the masses whittles away at my patience every day, day after day. I grasp for signs of humanity but rarely find genuine goodness in anyone. What makes me write today is something that has caused me to rant before. This time the topic could have caused loss of my health or even my life.

Someone on a cell phone nearly ran me off the road this morning on Route 33 on my way to work. She started moving over into my lane, forcing me to brake and travel over the rumble strips and partially into the grass at about 60 miles per hour. She must have hit her brakes and slowed down as I recovered back onto the road and accelerated as far as I could away from her and her vehicle. I was sick to my stomach for a good thirty minutes after that. I should have tossed my cookies at work; I probably would have felt better.

Two weeks ago someone in a car with a cell phone stuck to her ear blatantly ran a red light in front of me, coming about two feet from colliding with my car. She gestured to me as if I did something wrong. She ran the red light from a semi-blind spot for me, from behind a bridge.

There was also an asshole that made a left turn in front of me as I was making a right turn, but at least he was not on a cell phone; unless he was using hands-free Bluetooth crap.

Cell phones while driving are dangerous. It does not matter how much technology is thrown at this it will not change how irresponsible some people are. These people are not going to change until they collide with another vehicle or wrap their car around a light pole or kill someone. I do not want to be that statistic.  

If you get a call then either offer to call back or pull over somewhere and talk.

I do not want my tombstone to read "killed by some douchebag driving a Durango while blabbing on a cell phone"

Wednesday, July 21, 2010

Back to school, save big?

I'm looking at the most recent Tramlaw savings newsletter and it had a back to school section aimed at college students. Yes, you new and returning college students can save money on college essentials like MP3 players, Oisiv flat panel LCD TVs and video games.

Huh? What is wrong with that picture?

First question: MP3 players? Get with the times. Nearly every smart phone, semi-intelligent phone and flat-out dumb phone can play your MP3 library. Why purchase another device, unless you have some Batman utility belt fetish?

Flat panel LCD? How about letting mom and dad have the LCD while you take their hand-me-down television. Who has the better odds of being burglarized and having that nice, new LCD stolen; someone living on campus, maybe?

Video games? Maybe for Christmas but not for new students. I'm also guessing that anyone already into video games has already been nurtured through childhood by five or six game consoles and probably does not need a new one. Serious gamers are going to want a gamer PC (see below) with quad-core, multi-video, SSD in RAID-0 goodness; which they also probably already have.

High ticket electronics for college students should start at laptops/netbooks. In this day and age they are almost a necessity.

In all fairness to Tramlaw, they do have an "off to college" department page that advertises their cheap-ass furniture and essentials like appliances, vacuum cleaners,  bedding, cheap storage and microwaves. That department also advertises the MP3 players, video games and mini-fridges that should be secondary to everything else a college student is going to need like outrageously priced books and lab materials, tuition, rent, fuel, utilities, food (Tramlaw covers that one, though), transportation and communications.

But hey, that is what student loans are for, right?

Thursday, July 08, 2010

Importing .csv into Google contacts

Yesterday I started work on creating a comma separated values file to import into Google for my looming smart phone purchase.

I added a fake contact, Rod Vanhugendong, and all of Rod’s contact information including work, personal, birthday, etc. I exported that contact to a .csv file so I could get column headers and know where to put data.

I put a group of contacts in, saved the file and then imported the data.

My first attempt failed because all of the data got shoved into notes, not the individual fields. I searched Google help and found little from Google but numerous posts from individuals having similar issues. There were two solutions so I tried one and that appeared to work, eventually. The solution that worked put everything into sections with column headings like Section 1-Mobile.

More on this later; I had some time to kill during lunch.

Wednesday, July 07, 2010

Upgrade time, mini rant

This is a little mini-rant. I was looking to upgrade my home workstation from an AMD X2 to a Phenom II quad-core and noticed there are six-core processors out now. I checked the Newegg reviews and had a laugh.

One purchaser had one of them with 3TB of RAID10, dual video cards and 8 GB of memory, and the system can miraculously handle anything thrown at it. I would hope so. That system has more processing power than all the African nations that qualified for the World Cup; it damn well better be able to handle everything thrown at it.

Next you have the overclockers looking to squeeze that extra decimal out of their Windows 7 Ultimate score by running the processor at the fastest clock speed and with the most cores, beyond the unit specifications. It is a concept I just don't understand now that the Celeron era of the 1990s is behind us. After the upgraded cooling, the cost is more than the next step up in processors with stock equipment.

I priced out some configurations and folks are dropping anywhere from $1,300 to $2,400 every 3 or 4 months when new technology comes out. Where are they getting the money?

Tuesday, June 29, 2010

Hands free texting

I just saw a spot on the local news about hands-free texting. You talk into your Bluetooth headset and your voice is converted to text so you can text your friends or post a Twitter update on your way home from work.


Why are people so desperate to do anything but drive when they are driving a car? Do we really need more distractions while operating machines capable of causing serious injury and death when used irresponsibly? Does that bitch on her cell phone, yelling at me while speeding off in the wrong direction in the Walmart parking lot, really need technology to Twitter the deal she just got on kitty litter?

Monday, June 28, 2010

Ubuntu 10.04 to WHS shares

Ok, so I have this Windows Home Server and an Ubuntu 10.04 desktop that I would like to connect to it. I tried the Nautilus Windows Network browser and none of the work groups would mount. The original document from the Internet (duplicated about eighty times on different servers) offers no help beyond "it should be there."

One post on the Ubuntu forums suggested making a directory in /mnt for each Windows Home Server share, then mounting each share with mount -t cifs. To me this seems like a lot of work, but I have not found another way to get this working, so install smbfs and create some directories:

sudo aptitude install smbfs
for i in Documents Music Photos Software Videos; do mkdir -p ~/Grouchy-WHS/$i; done

To get the shares to mount I had to specify a user and a password in the options, but a single-line for loop did the trick:

for i in Documents Music Photos Software Videos; do sudo mount -t cifs -o user=grouchy,password=getbent //Grouchy-WHS.local/$i ~/Grouchy-WHS/$i; done

Okay, now all I have to do is type the above line in whenever I want to get to the shares. That sounded stupid, so I tried making a custom panel launcher. No success. I eventually created a script to handle the mount and unmount; the unmount is the same loop in reverse:

for i in Documents Music Photos Software Videos; do sudo umount //Grouchy-WHS.local/$i; done

Both work from the terminal but I cannot get them to work from the panel application. It seems so simple.

Tuesday, April 27, 2010

2010 Oracle VM, part 2, OEM database

First, create a linked clone of the OEL5.5 VM created in part one; call it something not stupid - like OEL5.5 OEMGRID. Quick note: you might need to bump the memory of this virtual machine to at least 1.5GB or else the Oracle Database installer will complain; that did not happen the first time I did this, but the second time I was about 3K (yes, 3 measly F'ing K) short of the minimum.

Start the cloned VM and log in as root.

I had the luxury of having a domain name and static IP address so I used the following tool to configure my network and then edited /etc/hosts:

vi /etc/hosts # add an entry for host grouch1

I would reboot now so the rest can be handled through ssh and cut and paste (unless you like typing).

Oracle Database Preinstallation Requirements

Most of the chapter two Preinstallation Requirements have been satisfied by installing the oracle-validated .rpm package (part one). The oracle-validated .rpm installs all Oracle Database dependencies and also modifies /etc/sysctl.conf, /etc/security/limits.conf, /etc/modprobe.conf and /boot/grub/menu.lst to include kernel parameters. It also creates some of the users and groups. Log in as root and execute this to finish up and add an administrative user:

/usr/sbin/groupadd oper
/usr/sbin/groupadd oracle
/usr/sbin/usermod -g oinstall -G dba,oper,oracle oracle
passwd oracle
/usr/sbin/useradd grouchy
/usr/sbin/usermod -g grouchy -G dba,wheel grouchy
passwd grouchy

The next step is to create the directories where the software will be installed, for database files and for recovery files (note: in real-life production land these could be on separate disks or one gigundous LUN, but we are in happy fun test land so who cares):

mkdir -p /u01/app/oracle
mkdir -p /u02/oradata
mkdir -p /u03/recovery_area
chown -R oracle:oinstall /u01/app/oracle /u02/oradata /u03/recovery_area
chmod -R 775 /u01/app/oracle /u02/oradata /u03/recovery_area

A reboot is not required but why not (what better way to test what you screwed up). Now might also be a good time to take a snapshot in VMWare.

Oracle Grid Infrastructure

The OEM VM will not be clustered and will not be using Automatic Storage Management (ASM) so the Grid Infrastructure for standalone server and Oracle Cluster Synchronization Services (CSS) are not required.

Installing Oracle Database

Log in to the VM as root. Connect the database .iso image as a DVD, or mount the DVD and connect that to the VM. If the empty directory /media/cdrom does not exist, make it. As root, mount the CD (mount -t iso9660 -o ro /dev/cdrom /media/cdrom).

I use PuTTY with X11 forwarding enabled in my Ubuntu workstation (that I am running VMWare Workstation on) to connect to the VM as user oracle and do the installation.

First, edit ~/.bash_profile and append the following:

umask 022
export PATH

Do a quick . .bash_profile to execute that script. Next, create some directories in the oracle user home (personal preference):

for i in backup crap expimp junk log run save scripts sql temp; do mkdir ~/$i; done
cd ~/crap

Start installing:

  1. /media/cdrom/runInstaller
  2. Skipped security updates; Yes
  3. Installation Option; Install database software only
  4. Grid Options; Single instance database installation
  5. Product Languages
  6. Database Edition; Enterprise Edition
  7. Installation Location; Oracle Base: /u01/app/oracle Software Location: /u01/app/oracle/product/11.2.0/dbhome_1
  8. Create Inventory; /home/oracle/oraInventory; *1
  9. Operating System Groups;
  10. Prerequisite Checks; all passed
  11. Summary;
  12. Install Product;
  13. Finish

*1 Pure laziness on my part: the preinstallation instructions never created nor set up privileges on the default /u01/app/oraInventory, but now that I think about it, the inventory directory should be outside of the whole OFA structure anyway.

Oracle Database Postinstallation Tasks

I did not do anything here.

Go to Tahiti and check out the installation guide for more information


Corrected errors, added smart-ass comments, etc. blah blah blah...

Monday, April 26, 2010

My 2010 Oracle VM project, part one

As I look back through my blog there have been numerous ambitious attempts to do this but all have stopped before they got started. Pretty sad. Such is life, especially when you yearn to have one instead of fiddling around with software when you should be doing anything but.

Here are the constants:

  • VMWare Workstation 7 on Ubuntu 10.04 64 bit
  • Oracle Enterprise Linux 5 Release 5 32 bit

Here are my goals:

  • One VM with OEL, OEM GRID 11.1, Database 11.2 (EM/RMAN repository)
  • One VM with Database 11.2 using ASM

The first step is to create a VM I can use as a template for my two goal VMs.

Create template OEL5.5 VM:

  1. Create a VM with at least 1GB of memory (2GB if feasible) and one 20GB (to 40GB) virtual disk (bridged network, uncheck automatically connect new USB device, do not connect sound card and printer at power on)
  2. Power on the VM with the Oracle Enterprise Linux 5 Release 5 mounted as an .iso (or physical DVD)
  3. Press ENTER at the "boot:" prompt, next, skip, next
  4. Choose language and keyboard
  5. OEL partitions everything perfectly, no need to review or modify (/boot with LVM with swap and / volumes)
  6. Next, GRUB, DHCP, time zone (uncheck System clock uses UTC)
  7. Set root password
  8. Select Customize now. Next.
  9. Desktop Environments. Uncheck GNOME
  10. Applications. Uncheck all (except maybe Text Based Internet)
  11. Base System. Uncheck Dialup Networking Support and X Window
  12. Base System. Check System Tools; click Optional packages, check oracle-validated
  13. Next, next, installing packages, reboot.
  14. Highlight networking, then run tool
  15. Set Firewall and SELinux both to Disabled
  16. Next
  17. OS installation complete. Login. User root.

Install VMware Tools:

  1. VM menu: Install VMware Tools (mounts the tools installer as a virtual CD)
  2. mkdir /media/cdrom
  3. mount -o ro /dev/cdrom /media/cdrom
  4. cd
  5. tar -xzf /media/cdrom/VM*tar.gz
  6. cd vmware-tools-distrib
  7. ./vmware-install.pl
  8. I kept defaults for everything; the installation unmounts the VMware Tools image
  9. cd
  10. rm -fr vmware-tools-distrib
  11. shutdown -h now

Take a snapshot of the OEL5.5 virtual machine. This snapshot will be used in the cloning process.

In the next parts I will use this template VM to create the OEM GRID and Database VMs.


Now on Ubuntu 10.04

Tuesday, April 20, 2010

The Oracle CBO statistical time bomb

Ever have a query or process that runs fine for years and years and years but then one day just slows to a crawl, going from seconds to hours in execution time? If so, you have run into what I call the Oracle CBO statistical time bomb.

Oracle 10g/11g has a daily job that looks for "stale" and missing optimizer statistics on Oracle table/index objects. Those optimizer statistics are used by the optimizer when choosing the best access paths, join methods, and index or table scans for a SQL query. The default for "stale" is 10 percent of the data changed (fixed in 10g, configurable in 11g).

Let's assume we have this really, really big invoice table that grows at a constant rate over time. This table has an index on the financial quarter. Each quarter roughly 25,000 new rows are inserted, and then updated 3-5 times each (as status changes, etc. whatever), and some idiot needs to clean up a couple hundred bogus rows each month. Here is the table for 2010:

Financial  Total      Insert  Update  Delete  Changed  Percent
Quarter    Rows       Rows    Rows    Rows    Rows     Changed
2010 Q1    1,000,000  25,000  80,000  169     105,169  10.51%  - daily job gathers statistics
2010 Q2    1,024,836  25,000  80,000  169     105,169  10.26%  - daily job gathers statistics
2010 Q3    1,049,672  25,000  80,000  169     105,169  10.01%  - daily job gathers statistics
2010 Q4    1,074,508  25,000  80,000  169     105,169   9.78%  - not stale yet, still have 2010 Q3 statistics
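The arithmetic is easy to check. Here is a quick sketch (plain Python; the 10 percent threshold and the row counts come from the numbers above) showing how the same volume of change slips under the staleness threshold as the table grows:

```python
# Oracle's daily job considers statistics stale once the changed rows
# exceed 10% of the table's row count (the 10g default).
STALE_THRESHOLD = 0.10

def is_stale(total_rows, changed_rows, threshold=STALE_THRESHOLD):
    """Return (fraction_changed, stale?) for one quarter."""
    pct = changed_rows / total_rows
    return pct, pct > threshold

quarters = {
    "2010 Q1": 1_000_000,
    "2010 Q2": 1_024_836,
    "2010 Q3": 1_049_672,
    "2010 Q4": 1_074_508,
}
changed = 25_000 + 80_000 + 169  # inserts + updates + deletes each quarter

for quarter, total in quarters.items():
    pct, stale = is_stale(total, changed)
    print(f"{quarter}: {pct:.2%} {'stale' if stale else 'NOT stale'}")
```

Same change volume every quarter, but by Q4 the table has grown enough that the daily job shrugs and walks away.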

Without current statistics, the Oracle cost-based optimizer thinks one index and access method would be more efficient than a different index and access method (when it really, really, really is not).

I'll need an Oracle Guru to correct me but here is what happens in 2010 Q4:

Your super-optimized, meticulously coded query selects the invoices by quarter and by another column that comes from a massive subquery.

For 2010 Q1, Q2, and Q3 the CBO says there are 25,000 rows behind the quarter index and likely fewer rows using an index on another column (customer number, perhaps) if it does the massive subquery first. The CBO gets it right: the massive subquery executes first, then the table is read via the index on the other column.

For 2010 Q4 the CBO says there are no (or few) rows for Q4 (when there are really 25,000), so it reads the invoices using the quarter index and then runs the massive subquery as a nested loop each time, because the cost of the quarter-index read times the cost of the massive subquery looks lower than the plan above (when it really is not). Instead of doing that nested-loop massive subquery on a couple of rows (yawn), it does it 25,000 times. Yikes.

What to do? Plan ahead.

Go to Tahiti and research DBMS_STATS in the Oracle Database PL/SQL Packages and Types Reference. Know your tables and defuse the time bomb before it goes off. Let the daily job handle the stale statistics, but tables that contain an index on a period of time (quarter, year, etc.) or on a code based on a period of time (e.g. the Spring 2010 line of Mountain Dew with Ginseng and Xanax) should be researched as candidates for statistics gathering outside of Oracle's daily job. Gather table statistics before the big processing periods.


Oracle and SSIS; ORA-00903 during verification

Can you tell I have been suppressing my pain and hate into a nice little ball, instead of talking about my feelings while lying in some lush, peaceful green meadow, sipping chamomile tea and popping anti-depressants like candy?

This issue is one of those classic, what the **** is going on here errors.

Here is what I am suffering with:

  • Windows XP and Windows Server 2003
  • SQL Server 2008 Integration Services
  • Native OLE DB\Oracle Provider for OLE DB, Oracle 11g R1 or Oracle 10g
  • OLE DB Source (Oracle) using a query.

The Problem

Getting ORA-00903 (invalid table name) during the verification step when executing a Data Flow OLE DB (Oracle) source, even though at design time (in the OLE DB Source Editor dialog) the SQL parses correctly AND pulls back test data in the source AND the query runs successfully everywhere else using the same credentials.

The Solution

Break something.


  • in Oracle, CREATE a VIEW for the query
  • if the query contains a user function, you will also need to (in Oracle) GRANT EXECUTE ON schema.function TO view_schema_owner WITH GRANT OPTION
  • in Oracle, GRANT SELECT ON the VIEW TO the USER used by SSIS (if necessary)
  • in SSIS, use the Advanced Editor on the OLE DB source; Component Properties; AccessMode = OpenRowset, OpenRowset = SCHEMA.view (the standard editor will not pull up the view and you are limited to a drop-down)

There that was painless, right?

I have been told that shutting down Visual Studio sometimes helps (it did not for me). Shutting down Visual Studio does work if you switch databases in the connection manager and then get ORA-00903 in the OLE DB source. One co-worker stated he rebooted his machine and then things worked (that did not work for me either).

I'm sure I was doing something stupid like trying to use a subquery or SYSDATE but WTF; it parsed, it pulled back test data, it works everywhere but the verification step when debugging it.


Oracle and SSIS; SQL tasks and procedures

I am posting today to hopefully prevent others from the hell I have been through. I really have no idea why I continue to do this as my psyche at this point is so eroded that I should be deriving pleasure from the suffering of others like some serial killer in the making.

Getting Oracle and Microsoft to play nice is like trying to get Sean Hannity and Nancy Pelosi to share a quiet evening together making babies while discussing health care and what to spend their money on. If that reference is dated then consider the choice of wiping with toilet paper or the large grain sandpaper and then using the sandpaper; or perhaps shoving a sea urchin up your nose then pulling it out the other nostril.

Here is what I am suffering with:

  • Windows XP and Windows Server 2003
  • SQL Server 2008 Integration Services
  • Native OLE DB\Oracle Provider for OLE DB, Oracle 11g R1 or Oracle 10g
  • Execute SQL Task trying to execute a PROCEDURE or PACKAGE.PROCEDURE that has input or output parameters

Connection Manager

The only non-default property is RetainSameConnection set to True (connections to Oracle are expensive). Personally, I see no need for resource pooling but that is another post.

Execute SQL Task

SQLStatement: {call your_stupid_package.dumb_ass_procedure(?,?)}
(the ODBC {call} syntax with ? parameter markers; the PL/SQL form BEGIN your_stupid_package.dumb_ass_procedure(?,?); END; is the usual alternative, but the {call} form is the one that worked here)
IsQueryStoredProcedure: False (usually greyed out, but can be set via Properties; it has no bearing on anything)

Parameter Mapping; add each parameter:

Variable Name: whatever User::variable you have defined already
Direction: Input or Output
Data Type: VARCHAR (I have not been able to get anything else to work yet)
Parameter Name: the left-to-right sequence number of the parameter, starting at zero (so if you only have one parameter, the name is 0)
Parameter Size: the size of the variable


I would cite references, but the vast majority of searches dead-end with single posts in forums where some poor slob went begging for help and found none; or sometimes found a reply from another kindred spirit suffering through the same agony. Those who did have solutions were not passing parameters, or were using an Expression to build the query string with the parameters embedded (doing their own SQL injection, which also does not help those using output parameters).

The above works, with a couple warnings, but works.

Monday, March 01, 2010

I did go in a ...

Yeah baby, you'd dig it the most. But you know what the funniest thing about Europe is?
It's the little differences. I mean, they got the same s... over there that we got here, but it's just – it's just there it's a little different.
All right. Well, you can walk into a movie theater in Amsterdam and buy a beer. And I don't mean just like in no paper cup, I'm talking about a glass of beer. And in Paris, you can buy a beer at McDonald's. And you know what they call a Quarter Pounder with Cheese in Paris?
They don't call it a Quarter Pounder with Cheese?
Nah, man, they got the metric system, they wouldn't know what the f... a Quarter Pounder is.
What do they call it?
They call it a "Royale with Cheese".
"Royale with Cheese".
That's right.
What do they call a Big Mac?
A Big Mac's a Big Mac, but they call it "Le Big Mac".
"Le Big Mac." What do they call a Whopper?
I don't know, I didn't go in a...

There was probably some reason or list of reasons why Vincent Vega did not go into that fast food establishment in that opening scene from the movie Pulp Fiction. I really cannot remember why we ended up there tonight. It was probably because it was on the same side of the street and I was wanting to avoid a left turn on a very short light from the eight dozen or more better fast food joints across the street. Maybe it was because I remember seeing a steak burger advertised on the television and felt the remote chance it might be palatable.

"We are out of steak."

A burger joint, out of meat. Holy crap. They offer to substitute two hamburger patties, which likely meant they had microwaved too many frozen hamburger discs and not enough frozen steak blobs and it would not be feasible to make another batch of steak. So instead of steak I get whatever animal they ground up for hamburger.

They are short handed so it takes about five minutes to assemble our meal from the heated storage bins. Yum. Of course, almost half the staff was out back smoking by the dumpster.

We arrive home and examine our food. My shoes looked more edible than the burger. There were two small, wilted pieces of lettuce hidden within the sauce. The bun tasted horrifying. It was just awful. The wife had a fish sandwich which was equally horrid. There was a thin, transparent film of tartar sauce on top of the processed fish wedge. The bun and breading managed to dominate both the fish and tartar sauce. That is just unnatural. How can baked and fried dough overpower bold sauce and fish that has endured months of storage and who knows what?

Ninety minutes have passed and the meal feels like a porcupine in my stomach. I am debating whether I should attack that wad of lard with a shot of hard liquor, drown it in milk, or send a salad into the battlefield of stomach acid. It is just an awful feeling.

For the two or three people that read this blog once a month or year, I beg you not to go to this place.

Thursday, February 25, 2010

Oracle MERGE, type 2 dimension

I decided to look into this topic while learning SQL Server Integration Services and debating with my caffeine-soaked brain whether the slowly changing dimension task container would be more efficient than a T-SQL MERGE statement. I began wondering why the hell we were planning on shipping about 40 gigabytes per day out of Oracle into tables in SQL Server that are truncated (emptied) and rebuilt, then queried to determine what 2 kilobytes worth of changes took place that day. Why on earth would we want to stress out that multi-core, refrigerator-sized server when an underpowered server, network, and storage can do all the work?

Fact tables and dimensions are part of data warehousing. A slowly changing dimension is a dimension that slowly changes over time (duh). There are three types of slowly changing dimensions:

  • Type 1: Update the old values with the new values (do not give a crap about history)
  • Type 2: Create a new row in the dimension with a new primary (surrogate) key, and optionally update existing rows
  • Type 3: Update the old values with the new values, and add additional data to the table (date of the change, keep one level of history)
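As a rough illustration of the three types, here is a Python sketch over plain dict rows (the column names like desc and prev_desc are made up for the demo):

```python
# Demo of the three slowly changing dimension types over plain dict "rows".
# Column names (key, desc, prev_desc, beg_date, end_date) are invented here.
import datetime

def scd_type1(row, new_desc):
    """Type 1: overwrite in place; history is lost."""
    row["desc"] = new_desc
    return row

def scd_type2(rows, key, new_desc, today):
    """Type 2: close the current row, insert a new current row."""
    for row in rows:
        if row["key"] == key and row["end_date"] is None:
            row["end_date"] = today              # old version becomes historical
    rows.append({"key": key, "desc": new_desc,
                 "beg_date": today, "end_date": None})
    return rows

def scd_type3(row, new_desc, today):
    """Type 3: overwrite, but keep one level of history in extra columns."""
    row["prev_desc"], row["prev_desc_date"] = row["desc"], today
    row["desc"] = new_desc
    return row
```

Type 2 is the only one that grows the table; the rest of this post implements that flavor in a single Oracle MERGE.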

You would figure type 3 would come before type 2, but who cares? This dimension uses a generated surrogate key for uniqueness (a sequence) and dates to track when the row was effective, with a NULL end date indicating the current version of the foreign primary key value; all other rows contain historical data and attributes.

The same logic used to maintain slowly changing dimensions can also be applied to tracking historical changes to tables.

The code that follows assumes only additions and changes are taking place; after all, nobody ever deletes data, do they? The dimension table is simple (no clustering, partitioning, bitmap indexes, or compression) and is purely for example:

CREATE TABLE crap_dimension
( dim_pk_surrogate     NUMBER
, dim_version_beg_date DATE
, dim_version_end_date DATE
, dim_pk_foreign       VARCHAR2(10)
, dim_description      VARCHAR2(60)
, dim_bulb             VARCHAR2(05)
, dim_sum              VARCHAR2(08)
);

The MERGE SQL statement is best used for inserts and updates. Here is the single statement used for the entire dimension update:

MERGE INTO crap_dimension m
USING
(
--- Step 1 BEG :: Select new and changed data for the USING section of the MERGE
SELECT DECODE(v2.dim_upsert_level,1,v2.dim_pk_surrogate,0) AS dim_pk_surrogate
     , v1.dim_pk_foreign
     , v1.dim_description
     , v1.dim_bulb
     , v1.dim_sum
  FROM
  (
   --- Step 1a BEG :: Collect data from your OLTP or ODS or POS
   SELECT crap_id   AS dim_pk_foreign
        , crap_desc AS dim_description
        , crap_poop AS dim_bulb
        , crap_turd AS dim_sum
     FROM crap_oltp_or_ods_source
   --- Step 1a END
   --- Step 1b BEG :: Remove identical data attributes already in the dimension
   MINUS
   SELECT dim_pk_foreign
        , dim_description
        , dim_bulb
        , dim_sum
     FROM crap_dimension
    WHERE dim_version_end_date IS NULL
   --- Step 1b END
  ) v1
  --- Step 1c BEG :: two rows are needed for updates (update existing and insert new data); LEFT JOIN a cartesian product of the current rows and levels 1 and 2
  LEFT JOIN
  (
   SELECT dim_pk_surrogate
        , dim_pk_foreign
        , dim_upsert_level
     FROM crap_dimension
        , (SELECT 1 AS dim_upsert_level FROM dual
           UNION ALL
           SELECT 2 FROM dual)
    WHERE dim_version_end_date IS NULL
  ) v2 ON v2.dim_pk_foreign = v1.dim_pk_foreign
  --- Step 1c END
--- Step 1 END
) u
ON (m.dim_pk_surrogate = u.dim_pk_surrogate)
--- Step 2 BEG :: If update, mark the end date
WHEN MATCHED THEN UPDATE
  SET m.dim_version_end_date = SYSDATE
--- Step 2 END
--- Step 3 BEG :: Insert new data and the new versions of changed data
WHEN NOT MATCHED THEN INSERT
     ( dim_pk_surrogate
     , dim_version_beg_date
     , dim_version_end_date
     , dim_pk_foreign
     , dim_description
     , dim_bulb
     , dim_sum )
VALUES
     ( crap_seq.NEXTVAL --- the dimension's surrogate key sequence
     , SYSDATE
     , NULL
     , u.dim_pk_foreign
     , u.dim_description
     , u.dim_bulb
     , u.dim_sum )
--- Step 3 END
;

I tried to break down the statement into steps.

Step 1a collects data from your ODS or OLTP and can be a simple query or a gigantic, complicated parallel query with a hundred joins and aggregate steps. The important thing to remember is we only want to do this expensive step once. When 1a completes we will have a fairly large result set.

Step 1b subtracts all identical data attribute rows that already exist in the dimension from the previous result set. This should significantly reduce the working set of rows at the expense of a full table scan through our dimension. When 1b completes the working set will contain new data and changed data from the source tables.

Step 1c creates data for updates. If data in the source has changed then two rows will be needed for the dimension: one row to update the existing data, making that row historical, and one row to insert for the new data, making it the current row. The surrogate and foreign primary keys for all current rows are Cartesian joined to the values 1 and 2 (one full table scan then a hash join instead of two full table scans) and then left joined to the previous result set. New data from the source will not match anything in v2 and will remain as one row with a NULL surrogate primary key. Changed data from the source will match and join to the two rows, creating the two rows needed to do the update and insert. The surrogate primary key will be 0 for inserts (assuming the sequence started at 1 or higher) and the real value for updates.

Step 1 finishes with rows containing changes to be applied to the dimension. Step 2 updates historical rows simply setting the end date to the current date. Step 3 inserts new data from the source and the new current version of the row for changed data.
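The set logic in steps 1a through 3 can be sketched outside the database. Here is a minimal Python simulation over in-memory rows (all data and names are made up; plain comparisons stand in for the MINUS, the left join, and the sequence):

```python
# Simulation of the MERGE's set logic over in-memory rows (hypothetical data).
# A tuple of attributes stands in for (dim_description, dim_bulb, dim_sum).

def scd2_merge(dimension, source, next_key, today):
    """dimension: list of dicts {pk, fk, attrs, beg, end};
       source: list of (fk, attrs) tuples from the OLTP/ODS."""
    # Current version of each foreign key (end date is NULL)
    current = {r["fk"]: r for r in dimension if r["end"] is None}

    # Step 1b: MINUS -- drop source rows identical to the current dimension row
    working = [(fk, attrs) for fk, attrs in source
               if fk not in current or current[fk]["attrs"] != attrs]

    # Steps 1c/2/3: changed rows close the old version and insert a new one;
    # brand-new rows just insert
    for fk, attrs in working:
        if fk in current:
            current[fk]["end"] = today       # step 2: mark old version historical
        dimension.append({"pk": next_key, "fk": fk, "attrs": attrs,
                          "beg": today, "end": None})  # step 3: new current row
        next_key += 1
    return dimension
```

Changed rows produce an update plus an insert; brand-new rows produce only an insert, which is exactly what the DECODE/left-join trick arranges in the real statement.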

More virus crap

It had been a while since I went on a tour through the blogosphere using the next blog link at the top of the page. After stumbling through a lot of foreign crap in languages I did not understand, I came to a blog that set the hard drive going nuts. Oh F.

When the hard drive goes nuts you usually have a virus or something nasty trying to do something to your computer. AVG caught and probably stopped and cleaned the malicious crap. I was going to post some Oracle MERGE code but now I have to wait until a full system scan completes and bitch about things from my netbook.

So here I am watching Olympic curling (Canada versus Switzerland, ladies) and typing this on my netbook with cats jockeying for space on my lap and on the couch around me. I keep getting the same thought in my head: why the hell am I still blogging? Seriously. Why do I share code and my life with the world? Most of my friends have moved on to do their belly-aching in status updates and micro-blogging. Any therapeutic value from blogging can easily be replaced by a good beer or six (Sam Adams Winter Ale, I have a stockpile of it). 

For the untrusted Internet I need to look into programs like Sandboxie or running web browsers inside virtual machines. When I purchase a new computer I will likely be running Linux, which is not threat-free, but fewer viruses are written to exploit it so it seems safer.

As I watch the women's Olympic ice hockey gold medal game I wonder if this will be my last blogger rant. Maybe I will start using Google Buzz and join others with one-line status updates like "my computer might be frakked" or "I really hate Microsoft again".

Sunday, January 31, 2010

No time for stupid

I try not to be in a hurry during most of my life. On the roads I try not to rush things especially in rough weather. When developing I try to do things right the first time instead of pushing something out and then dealing with the problems later. There is one instance, however, when I become angry; when my time gets wasted by stupid people.

There should be a gate with a short IQ test guarding the entrance of every self-scan lane at a supermarket. Tonight I picked the self-scan lane with the fewest people in it. One person with two small containers. That seemed reasonable enough. Not for this complete waste of human genetic material. 

The first problem, this dude was eating out of one of the containers. After scanning the first, unopened container he places it on top of the machine instead of into the bag or at least onto the area that measures the weight of the item you just scanned. So he waits until the bagging error message comes up and presses the button to skip bagging.

Next, this idiot tries to scan the container he is eating out of. His fingers are sticky to begin with so he is already limited in dexterity. The bar code is on the side of the container so he then tries to angle the container so it will scan and not spill the contents. It never occurs to this idiot during the eight or nine attempts to scan the container to put the lid back on the container for the two or three seconds needed to scan the damn thing. Eventually, he does get the thing to scan. Does he put it in the bag? No, he puts it on top of the machine and waits the five to ten seconds for the scanner to yell at him so he can press the skip bagging button.

He then starts feeding money into the cash input as the machine repeatedly asks him if he is done scanning crap and is ready to start feeding the thing money. You are probably wondering where the cell phone comes in. It was about ten seconds before the machine locked up and required the cashier to come over and fix things. At least he did not stop screwing up the self-scan machine to talk on his phone but instead placed the phone on top of the machine and talked at it. However, he is now out of cash and wants to start feeding coins into a coin slot with tape on it and a large sign next to it stating the device cannot accept change.

He pouts for a moment and then barks at the cashier to come fix the machine. It takes the cashier a minute or two to log in to the machine, reset it, eject the cash in the cash feeder, and then set his whole transaction to be handled elsewhere to keep things moving. Thank you. What should take twenty seconds on a rough day takes more than five minutes as the lines at the 20-items-or-less and self-scan lanes extend beyond the edge of the aisles (but are still moving).

I should not be worried about five minutes out of the (hopefully) millions of minutes in my life but I do let it get to me. I hate waiting for stupid crap like that.