SelfDiagnose and OGNL: check your deployment

June 16, 2008

Some time ago I wrote about SelfDiagnose in “SelfDiagnose, the world according to my (ATG) application”.
Since then SelfDiagnose has gained some nifty features. The latest release adds support for OGNL, which gives it some extra punch. I can now use constructions like:

<checkvaluematches
    value="${@java.lang.Integer@parseInt(configList.length)}"       
    comment="Number of configured Endeca instances"
    pattern=".*" />

or constructions like

<checkendecaservice host="${configList[0].host}"
    port="${configList[0].port}"
    query="N=0"
    comment="Endeca instance 1 connection test"/>

The ${configList[0].port} is a typical OGNL construction.

The CheckEndecaService task can also return the Endeca com.endeca.navigation.ENEQueryResults response, and with OGNL you can easily dissect that response.

This can be extremely handy for querying the latest Forge or pipeline version of an Endeca build in the different ATG environments where you cannot directly access those services, for instance in production.
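As a sketch of what such a dissection could look like, assuming the Endeca task can store its result in a variable via var just like the ATG task can (that attribute, the variable name and the pattern are assumptions here; the OGNL path maps to getNavigation().getTotalNumERecs() of the Endeca presentation API):

<checkendecaservice host="${configList[0].host}"
    port="${configList[0].port}"
    query="N=0"
    comment="Endeca query, keep the full ENEQueryResults for later checks"
    var="eneResults"/>
<checkvaluematches
    value="${eneResults.navigation.totalNumERecs}"
    comment="Total number of records in the Endeca index"
    pattern="[0-9]+" />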


SelfDiagnose, the world according to my (ATG) application

April 17, 2008

We have all been there when developing a J2EE application: the environment nightmares. As your stuff goes through the different stages of integration, acceptance, production and so on, Stuff Just Breaks™ because of misconfiguration. Configuration you have no control over.

A database table missing here, a JNDI binding forgotten there, a URL not reachable, weird classloading nightmares because another jar is being picked up in acceptance.

Here is where SelfDiagnose comes to the rescue. Somehow this little gem gets no press whatsoever. Lately some new tasks have been added to the mix. This blog by Ernest explains something about compile-time dependencies. But more interestingly (to me), SelfDiagnose now contains a CheckAtgComponentProperty task and a CheckEndecaService task.
CheckAtgComponentProperty lets you check an ATG component property. I know this can be done with ATG’s component browser as well, but hold on.
CheckEndecaService checks the availability of an Endeca service.

Combining and chaining these tasks creates a powerful diagnosis. See the following snippet, where an ATG property is queried first and its value is then passed on to the Endeca task (chaining is another nifty SelfDiagnose feature).
The code is heavily customer oriented, but you will get the idea.

    <checkatgcomponentproperty
        component="/wsp/common/services/search/balancer/connections/EndecaConnection"
        property="host"
        comment="Endeca Host"
        var="eneHost"/>
    <checkatgcomponentproperty
        component="/wsp/common/services/search/balancer/connections/EndecaConnection"
        property="port"
        comment="Endeca Port"
        var="enePort"/>
    <checkendecaservice host="${eneHost}" port="${enePort}" query="N=0"/>

The really cool and not so well understood part of SelfDiagnose, in my opinion, is that it runs its checks from inside the environment you are executing in. This means that the above example will output the ATG configuration and check the configured Endeca instance of the actual environment.
Hitting the selfdiagnose.html URL will show the results (screenshot: “Endeca Diagnose”).

I only mentioned the ATG and Endeca tasks, but there are a lot more that can be extremely helpful.
When something is misconfigured, checking the selfdiagnose URL can save a lot of time and energy.


Daring Devious Darryl Dude Needs Feedback and a bit of money along the way

January 17, 2008

Are you living in the Netherlands and need a book, a nice CD, perhaps a DVD, the latest Wii game or what have you? Use this affiliate thingabee. You will make Darryl happy.

I know I will use it.

Darryl’s commuting habits need a faster car.


Do not use the ATG ApplicationLogging API

November 4, 2007

In my current project we use atg.nucleus.logging.ApplicationLogging. This is with ATG 2007 on JBoss.
At the start it was decided to use the ATG logging API instead of Log4J (or any other Java logging framework). ATG and ApplicationLogging were common practice, so why not use it, I was told.
Later, when I saw the code littered with complicated logging statements in which the programmer had to include the name of the method issuing the statement, I was not so sure.

1 Log4J ConversionPattern

Adding the containing method to your logging statement is not necessary. You can use the Log4J conversion character %M in the PatternLayout, which outputs the name of the method where the logging request was issued.
It should be used with care of course, since it is a performance drain, but we can alter this at runtime so that’s no biggie (see 2).
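For reference, this is roughly what that looks like in a plain log4j.xml appender (a minimal sketch; the appender name and pattern are just examples):

<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
        <!-- %C.%M prints the calling class and method; both are expensive, use with care -->
        <param name="ConversionPattern" value="%d %-5p [%C.%M] %m%n"/>
    </layout>
</appender>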
But ATG logging attaches to JBoss logging, and there %M is lost, since the ATG logging wrapper is considered to be the issuing method. So logDebug, logInfo, logError and friends are reported as the issuing methods instead of the actual method. Pretty useless.
This would not have happened with plain Log4J instead of ATG ApplicationLogging.

2 Different logging level

One argument used is that you can alter the logging level of individual components in the ATG admin console.
This is trivial; you can also do this in log4j.xml with Log4J categories, as shown below.
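A minimal example (the package name here is made up):

<!-- DEBUG logging for the search components only -->
<category name="com.mycompany.wsp.search">
    <priority value="DEBUG"/>
</category>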

3 Dynamic Logging Level

Another argument: you can change the logging level in ATG without restarting the server.
Look at the JBoss Log4jService, which automatically picks up changes to the Log4J configuration.
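In JBoss 4.x that service is already configured in conf/jboss-service.xml, roughly like this (the exact name of the Log4J configuration file differs per JBoss version):

<mbean code="org.jboss.logging.Log4jService"
       name="jboss.system:type=Log4jService,service=Logging">
    <attribute name="ConfigurationURL">resource:log4j.xml</attribute>
    <!-- re-read the configuration every 60 seconds -->
    <attribute name="RefreshPeriod">60</attribute>
</mbean>

Touch the file, and the new levels are picked up without a restart.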

Moral

  • Don’t use atg.nucleus.logging.ApplicationLogging; you butcher Log4J features.
  • Don’t think that a pattern which used to work will be the best pattern in the future.
    What worked with DAS does not necessarily work with JBoss.
  • Think.

Bundling external library classes inside a jar

September 13, 2007

Some software suppliers bundle external classes inside their proprietary jars. A good, or should I say bad, example is ATG…

JAXB conflict

With GlassFish Metro’s wsimport.sh script I generated Java interfaces and other supporting classes from a WSDL file. In a small test project it all worked like a charm.

Then I copied my code into the ATG project that needed it. My test case suddenly failed with a java.lang.NoSuchMethodError: javax.xml.bind.JAXBContext.newInstance(..
Ouch.

It took me some time to understand the problem: the ATG das2007.1.jar contains a lot of external libraries, for instance Xerces and Xalan. However, ATG repackaged those under its own namespace, for instance atg.apache.xerces, so conflicts are less likely.

But it also contains javax.xml.bind.JAXBContext, under its original namespace. Grr, this is JAXB 1, and my JAX-WS Metro stuff needs JAXB 2. Of course I can change the class-path order, and I can do this in Eclipse, so my tests will work.
However, the ATG specification says that ATG-Required modules are started up, in the order specified, before any modules started with ATG-Class-Path. So DAS will always be loaded before any custom jars, according to the specification… I have not validated this yet, but I’m afraid the specification is the way it works and thus my code will not work inside the application container.

This means that this lovely solution will not work.

And Spring, and CGLib, and Mozilla packages, and Sun packages, and IBM BSF, and Apache Commons, etc. etc.?

IBM BSF? The Bean Scripting Framework was promoted to Jakarta in 2002, and for some time now it has had an Apache namespace rather than an IBM namespace. That is some old stuff in the das jar. This takes “if it ain’t broke, don’t fix it” to a new level.
And does anybody else find it funny that an ATG jar, especially the one containing Nucleus, contains Spring classes? OK, it is just the Spring AOP classes, but still. Nucleus and Spring IoC are birds of a feather. These are just some examples of the embedded classes.

Why bundle libraries?

I sort of understand that you want to control the versions of the external libraries you need, but bundling them in stealth mode is bad practice in my opinion, especially for a framework like ATG. Frameworks don’t live in a vacuum; they are extended with custom code, and thus clashes are likely. I would rather see the separate jars, plus documentation stating which library versions your product has been tested with.

I’m not really fluent in Ruby, but Ruby’s require_gem with a version argument sounds like a nice feature.