Mar 16 2008

IKVM.NET – Java without installing

Tag: Uncategorized | Symon Rottem @ 7:57 pm

Working on the DocBook version of the Castle documentation this weekend, I found that to take advantage of the built-in syntax highlighting features of the DocBook XSL package I need to use a Java plugin, but the XSL processor I’ve been using doesn’t support the plugin. The solution, of course, is to move to a processor that’s Java based, but that then means the build server needs to have a Java runtime installed to execute it…or so I thought.

It turns out that there’s this really cool project called IKVM.NET that provides a Java Virtual Machine implemented in .NET, allowing you to run Java applications on the .NET runtime. After some reading and experimenting I managed to get it working, so now I can use that Java based processor without installing Java.

What’s more, this will help with another little problem I’ve been chewing over – the generation of PDFs using Apache FOP. Apache FOP appears to be the most mature Open Source FO processor out there and seems to be the recommended way of converting DocBook into PDF. The kink for me was – you guessed it – it’s another Java package.

Using IKVM can be as simple as executing ikvm.exe in place of Java on the command line. So instead of executing the following…

java -jar myapp.jar

…you can instead use…

ikvm -jar myapp.jar

IKVM also provides utilities for converting Java bytecode to .NET IL, which allows you to generate a DLL from a Java library that can be used directly in a .NET application. Very cool – I recommend having a look if there’s some fantastic Java library out there with no .NET equivalent.
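The static conversion is done with the ikvmc utility that ships with IKVM. A minimal sketch (the jar name here is hypothetical):

```shell
# Compile a Java library's bytecode to a .NET assembly.
# -target:library produces a DLL rather than an EXE.
ikvmc -target:library mylib.jar
# If all goes well this should emit mylib.dll, which can then be
# referenced from a .NET project like any other assembly.
```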

To get all this, all you need is a couple of DLLs and executables. Admittedly, at 26Mb it’s a little heavy to deploy with your code, but it’s nice to know it’s there as an option instead of being forced to install the “official” Java Runtime.


Mar 11 2008

Documenting Castle – Phase One Complete

Tag: Uncategorized | Symon Rottem @ 12:58 am

After many hours slaving over the keyboard the existing MonoRail trunk documentation has been transferred to DocBook format! Wahoo! I think I’ve given myself a crick in the neck and a good dose of RSI but hell – it had to be done, right?

The newly proposed structure is in place (apologies for the deviations from my previous post) and the existing content has been copied across but it’s not finished yet – there’s still a whole slew of tasks to undertake:

  • The styles still need to be built to provide a nice look and feel.
  • The output needs to be customized for each of the formats (I’m planning on Single Page HTML, Chunked HTML, HTML Help, VS 2005 Documentation and PDF at this stage).
  • Link endpoints need to be fixed.
  • The generation process needs to be properly automated.
  • The content hasn’t been refactored yet…this comes last.

But progress is being made (and I have a day job too!) Fortunately, Nicolas Vitra – a colleague of mine – is helping out with the PDF generation and adding syntax highlighting to the code samples, so things should start moving faster.

If you want to see the current state of things there’s a version of the docs for review here. Note, however, that this is not in any way considered release-worthy – it’s just a little sneak peek. The content will not be staying at that address and will be taken offline at some point in the not too distant future, though hopefully not before the docs go into production.

Please post comments related to the structure or any problems with the presentation. Just remember that the content has not yet been altered from what’s available on the Castle website, so any comments on that score should wait until it’s available in source control and others can get involved in the editing process.

Anyway – enjoy.

Edit: The URL for the documentation has been updated to a new location temporarily. 


Mar 08 2008

Documenting Castle – Tool Friction

Tag: Uncategorized | Symon Rottem @ 11:27 am

Working on the Castle documentation is bringing its own share of frustrations – scarily enough, not from wrangling the data but from the tools I’m using to do the task. As Ayende so often states, friction is something that just shouldn’t be there.

I don’t mind coding XML by hand, but I like to at least have an editor that will give me colorized markup, handle indenting for me and identify schema violations. Visual Studio 2005 – my development environment of choice – seemed like the right choice…except that the damned thing keeps borking on me with the following dialog when I try to paste data from some other file:

[Screenshot: Visual Studio error dialog]

I have no idea what’s causing this and much googling hasn’t provided me with an answer.

Hmm. Next tool please…let’s try something completely different – XML Mind’s XXE Personal Edition. This isn’t a bad little tool in that it shows DocBook data rendered so that it’s somewhat visual, allowing for WYSIWYG editing. The problem for me is that you can’t get down to basic XML – it uses a tree view of the nodes rather than straight XML – which screws up the copy-paste approach I’m using to cobble together the existing content. Also, it’s a tad on the slow side. Grrr – more friction.

So, for the moment I’m using Eclipse to do my XML editing. Does anyone have any other suggestions on a validating XML editor that supports a multi document interface, auto indenting, tooltips and error highlighting based on the schema or DTD defined in the XML and that is, most important of all, free?

Alternately, tell me how I can get VS 2005 to work.

Drop me a line if you have any suggestions.


Mar 07 2008

Documenting Castle – Proposed Structure

Tag: Open Source | Symon Rottem @ 2:17 pm

This documenting thing is something of a challenge. I’ve been working industriously away on grokking the DocBook schema a little better, figuring out the transformation tools and how to automate them, while also trying to come up with a structure that makes sense for presenting the data that already exists.

To get myself off the ground I’ve elected to restructure the existing MonoRail 1.0 RC2 documentation first since there are so many people out there currently using it and it makes sense for my team. Certainly it makes sense to get the trunk code documented quickly too, but I know the RC2 stuff better at the moment, so that’s my starting point, like it or not.

Below is the chapter/section layout I’m proposing for MonoRail. This layout covers pretty much everything that’s in the existing user guide and getting started guide. I think it exposes most of the concepts and addresses the concerns of those getting to know MonoRail in a top down fashion: the simple stuff is presented first, the concepts follow each other in a relatively intuitive order, and it can be expanded fairly easily. Of course, that’s just my take and you may not agree – feel free to provide constructive feedback and I’ll take it into consideration.

I’ll be posting more on the process I’m going to use for generating the documentation shortly, and should have a draft HTML version of the content based on the layout below using the existing content from the website in the next few days, so stay tuned!

1. Introduction
 1.1. Overview
 1.2. Background
  1.2.1. What is MVC
  1.2.2. Convention Over Configuration
 1.3. Why Use MonoRail
 1.4. How It Works
 1.5. Licence Information
 1.6. Support
2. Getting Started
 2.1. Requirements
 2.2. Creating the Project Skeleton
  2.2.1. Using the MonoRail Project Wizard
  2.2.2. Creating the Project Manually
 2.3. Controllers and Views
  2.3.1. Your First Controller and View
  2.3.2. Setting the Layout and Rescue
  2.3.3. Creating the Index View and Action
  2.3.4. Creating the Layout
  2.3.5. Seeing the Results
  2.3.6. Passing Values to the View
  2.3.7. Creating a Rescue
 2.4. Data Binding
  2.4.1. Simple Parameters
  2.4.2. Complex Objects
 2.5. Integrating with ActiveRecord
  2.5.1. Adding Assemblies
  2.5.2. Configuration
  2.5.3. Building the Model
  2.5.4. Initializing the Handler
  2.5.5. ActiveRecord Scaffolding
  2.5.6. Creating a CRUD Page Using DataBind
 2.6. Final Comments
3. Installation
 3.1. Running Under IIS
 3.2. Using Cassini
 3.3. Mono with XSP
 3.4. Mono with Apache
  3.4.1. Configuration
  3.4.2. Apache Httpd2
  3.4.3. Application Deployment
 3.5. Deploying to a Shared Host
4. Configuration
 4.1. Formal Definition
5. Controllers
 5.1. Naming Convention
 5.2. Areas
 5.3. Actions
  5.3.1. Default Action
 5.4. Redirecting
 5.5. Data Binding
  5.5.1. The SmartDispatchController
  5.5.2. Other Useful Properties
  5.5.3. Simple Parameter Binding
  5.5.4. Custom Binding
 5.6. Wizards
  5.6.1. Wizard Controllers
  5.6.2. Wizard Action Provider
  5.6.3. Steps
  5.6.4. Nested Actions
  5.6.5. DoNavigate
  5.6.6. Conditional Steps
  5.6.7. The WizardHelper
  5.6.8. Windsor Integration
6. Views
 6.1. Folder Structure Convention
 6.2. Selecting a View to Render
 6.3. Passing Values to a View
  6.3.1. The PropertyBag
  6.3.2. Flash
 6.4. Shared Views
 6.5. Cancelling a View
 6.6. Accessing Values Passed by the Controller
 6.7. View Engines
 6.8. Javascript and Ajax
7. View Components
 7.1. Creating a View Component
 7.2. Using View Components
 7.3. Passing Parameters
 7.4. Block and Nested Sections
 7.5. Built In View Components
  7.5.1. CaptureFor
  7.5.2. SecurityComponent
8. Filters
 8.1. Creating a Filter
 8.2. Ordering
 8.3. Skipping Filters
 8.4. Passing Parameters
 8.5. Block and Nested Sections
9. Layouts
10. Rescues
11. Authentication and Authorization
 11.1. Forms Authentication
 11.2. Filters
 11.3. Using PrincipalPermission
 11.4. The SecurityView Component
12. Helpers
 12.1. Built In Helpers
  12.1.1. AjaxHelper
  12.1.2. DateFormatHelper
  12.1.3. Effects2Helper
  12.1.4. FormHelper
  12.1.5. HtmlHelper
  12.1.6. PaginationHelper
  12.1.7. ValidationHelper
  12.1.8. WizardHelper
13. Resources and Localization
 13.1. Using Resources
 13.2. Setting Up the Current Culture
 13.3. Localization
14. Sending Email
15. Unit Testing
 15.1. The TestSupport Assembly
 15.2. Exposing the Website Application Directory
  15.2.1. Overriding GetPhysicalDir
  15.2.2. External Configuration
16. Integrations
 16.1. ActiveRecord
  16.1.1. Scaffolding
 16.2. Windsor Container
17. Advanced Topics
 17.1. Routing
  17.1.1. Routing
  17.1.2. Root Directory Mapping Workaround
 17.2. Dynamic Actions
  17.2.1. Dynamic Action Providers
 17.3. Scaffolding
 17.4. Extensions
  17.4.1. Custom Session Extension
  17.4.2. Exception Chaining Extension
  17.4.3. Creating Your Own Extensions
 17.5. Service Architecture
 17.6. Custom Bindable Parameters
 17.7. Using Resources to Store Views

Edit: After a comment on the mailing list from Erik Dahlstrand I noticed that somehow I’d forgotten to include the entire section on Helpers in the above structure so I’ve updated the list to include it.


Mar 02 2008

Documenting Castle – The Transformations Begin

Tag: Uncategorized | Symon Rottem @ 11:24 pm

It has begun. The process of transforming the Castle user guide and getting started documentation to DocBook format is underway.

The process has so far involved writing a (truly and stupendously horrific) XSLT that transforms the selected parts of the original XML used to generate the Castle website into DocBook format, plus the creation of a simple batch file that feeds all those XML files in each sub-folder to msxsl.exe with the XSLT sheet to perform the transformation.
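For the curious, the batch file doesn’t need to be anything clever. A rough sketch of the kind of loop involved (the stylesheet and output names here are invented, not the ones I’m actually using):

```shell
@echo off
rem Run every XML file in this folder and its sub-folders through
rem the DocBook transformation sheet using msxsl.exe.
rem (Assumes the out folder already exists.)
for /r %%f in (*.xml) do (
  msxsl.exe "%%f" to-docbook.xslt -o "out\%%~nf.xml"
)
```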

I say the XSLT is horrific mostly because my skills in the realm of XSLT authorship suck like a vacuum cleaner and I’ve had to kludge something together without really knowing what I’m doing…

The really interesting part came from attempting to transform the HTML tables that were embedded in the original XML into the DocBook table format – the two don’t bear nearly as much resemblance to each other as I would have liked. After banging my head against the wall for a couple of hours the problem was solved using parts of the work of someone with much better XSLT skills than my own. See? I have no trouble resting on the shoulders of giants.

Regardless of the complete mess that does the processing, the damned thing works. It seems to have created documentation fragments that can be validated against the DocBook schema with only a couple of small exceptions. Yay!

Next step is to start reorganizing the information into logical groupings to present as chapters and subsections.

Wish me luck.


Mar 02 2008

Open Source Documentation

Tag: Uncategorized | Symon Rottem @ 12:31 am

I love Open Source software. There are just so many benefits – I was sitting here trying to think of a list and then realized that others have enumerated them, so why should I bother? That said, one of the difficulties I’ve experienced with some OSS projects has been the quality and/or availability of documentation.

One of the projects that has been bothering me recently is the Castle stack, or the user reference documentation for MonoRail in particular. My team and I have spent a fair bit of time trying to grok MonoRail with Windsor integration and have not found it to be an easy affair. This is not because there isn’t enough documentation out there but, in fact, because it’s hard to find what you need.

Hammett has actually done a great job of writing user documentation for MonoRail (I’m sure there have been other contributors too), but the information that’s there (and there are pages and pages (and pages!) of documentation, including examples) is not laid out in a fashion that allows a new user to easily see the relationships or get a good overview.

Because of the frustration I and others have been experiencing I’ve decided to wade into the fray and have a go at restructuring and refactoring some of the documentation to try to address some of these issues. I’m currently leaning toward a hierarchical approach with a table of contents that clearly shows the hierarchy as well as rendering in both HTML and PDF.

To that end I’ve started a thread on the castle-project-devel mailing list to try and get some feedback on redesigning the docs so they meet more people’s needs. There’s already been some great feedback but if you’ve got anything to say on the subject feel free to jump in and contribute!


Feb 28 2008

Zenoss System Monitoring

Tag: Open Source | Symon Rottem @ 2:10 pm

At the moment I’m working on a system that consists of about 45 machines working together, providing a range of services, all of which need to be monitored for availability and to confirm that everything is running within measured thresholds. It’s important that if any of these machines or services fail, someone is notified, and that if the issue isn’t resolved in a reasonable time frame some kind of escalation process will ensure others are notified until something is done.

Having no available budget when I began investigating options, I started looking at FR/OSS solutions to solve the problem. After a couple of false starts with Nagios and Hyperic HQ (which only missed the cut because its free version was missing one particular feature I needed – the ability to schedule repetitive maintenance periods – a pity, because otherwise it looks like an excellent FR/OSS product) I took a look at Zenoss Core and had a nice surprise.

Zenoss does a spectacular job of simplifying the process of adding new machines to be monitored – once the hosts have been SNMP enabled (and WMI enabled, for Windows boxes) you can set up a network in Zenoss and tell it to scan the network for new devices after which it will dutifully add all detected hosts with SNMP.
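Incidentally, when a host refuses to show up it’s worth confirming that its SNMP agent is actually answering before blaming Zenoss. With the net-snmp command line tools installed, something like this does the trick (the host address and community string below are placeholders – substitute your own):

```shell
# Walk the system group on a host's SNMP agent; if this times out,
# Zenoss won't be able to model the device either.
snmpwalk -v2c -c public 192.0.2.10 system
```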

What’s really nice is that you can simply switch devices from a basic detected profile type to a more specific type (e.g. Windows host, Linux host, router, switch, etc.) and Zenoss will do further investigation on the device based on its type. For example, if you designate the device as a Windows host it will query for other Windows related information, including the software and services installed on the box. Similarly, if you select Linux or Solaris it will perform other OS specific checks, etc.

Also interesting is that it establishes an inventory of the software installed on all the devices so you can determine which machines are running which software. By default each host is re-profiled every 6 hours and if any changes are detected the database is updated and you can be notified of the changes.

Once the hosts have been added, Zenoss dutifully harvests issues from system logs, checks for availability of designated processes or services, and tracks values like available memory and processor usage over time (and yes, even custom data can be collected). Hell, there are even pretty graphs for you to look at. Once the data is coming in, notifications can be configured, triggered by outages or by data exceeding thresholds, and they can be sent by a variety of methods (email is the easiest to get running, but there are SMS/paging options amongst others).

Zenoss isn’t perfect, however. It can perform very slowly sometimes due to the way it manages caching data – it looks like it uses all available system memory to the point where it actually eats into the swap space as well. And navigation can sometimes be a pain, as you have to move through multiple menus to get to sub-groupings of machines unless you’re prepared to type the URL to the group by hand.

Regardless, overall I’ve been pretty impressed with what Zenoss can do. There are some features missing from Zenoss Core, but their enterprise version seems to address most of these, and since it’s OSS there’s nothing you can’t choose to do yourself…if you can find the time.


Feb 27 2008

More Configuring NHibernate Caches

Tag: NHibernate, ORM | Symon Rottem @ 6:52 am

One of my readers recently asked to see a sample of configuring NHibernate caching through the config file after reading my previous post Configuring NHibernate Caches, so here goes. Please bear with me in case there are any typos as this has been rolled by hand.

In this particular example I’m configuring NHibernate to use SysCache for the second level cache and have done it all in the app.config (or web.config if it’s a web application). Note that this is not a fully complete config file but deals with the parts necessary for this example.


<configuration>

<configSections>
<section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate"/>
<section name="syscache" type="NHibernate.Caches.SysCache.SysCacheSectionHandler,NHibernate.Caches.SysCache"/>
</configSections>

<!--
NHibernate specific configuration.
-->
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">

<session-factory>

<property name="hibernate.connection.connection_string">Your Connection String</property>
<property name="hibernate.connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="hibernate.dialect">NHibernate.Dialect.MsSql2000Dialect</property>
<property name="hibernate.connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
<property name="hibernate.connection.isolation">ReadCommitted</property>
<property name="hibernate.cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property>
<property name="hibernate.cache.use_query_cache">true</property>

<!--
Sets up class and configuration caching for domain classes.
These cache definitions can be tweaked to change how each class is cached.
-->
<class-cache class="Core.User, Core" region="User" usage="read-write"/>
<collection-cache collection="Core.User.Roles" region="User.Roles" usage="read-write"/>

<class-cache class="Core.Role, Core" region="Role" usage="read-only"/>

</session-factory>

</hibernate-configuration>

<!--
Defines Syscache specific configuration
-->
<syscache>

<!-- Class cache regions -->
<cache region="User" expiration="60" priority="3"/>
<cache region="User.Roles" expiration="60" priority="3"/>
<cache region="Role" expiration="86400" priority="3"/>

</syscache>

</configuration>

To explain this in a little more detail, the first section tells the application about the configuration sections that will be included in the config file and what handlers to use to interpret them. This is really just standard .NET config stuff:


<configSections>
<section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate"/>
<section name="syscache" type="NHibernate.Caches.SysCache.SysCacheSectionHandler,NHibernate.Caches.SysCache"/>
</configSections>

Then we deal with the main NHibernate configuration inside the hibernate-configuration section declared in configSections:


<property name="hibernate.connection.connection_string">Your Connection String</property>
<property name="hibernate.connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="hibernate.dialect">NHibernate.Dialect.MsSql2000Dialect</property>
<property name="hibernate.connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
<property name="hibernate.connection.isolation">ReadCommitted</property>
<property name="hibernate.cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property>
<property name="hibernate.cache.use_query_cache">true</property>

The key things to note here are the hibernate.cache.provider_class and hibernate.cache.use_query_cache properties which are indicating that we should be using the SysCacheProvider for caching and that the query cache should be enabled.

It’s important to note that just adding the hibernate.cache.provider_class property and configuring some of your classes will cause those classes to be put into the second level class cache, so that when you attempt to load them using ISession.Get or ISession.Load the database will only be hit if the matching entity is not found in the cache.

What it will not do, however, is provide any optimisation when you attempt to load entities using an HQL or ICriteria query. Queries will still go to the database to get their results unless, of course, you add the hibernate.cache.use_query_cache property to your configuration and explicitly specify that a query should be cacheable using the IQuery.SetCacheable(true) method – this must be done explicitly for each query you want to be cached.
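In code, enabling the cache for an individual query looks something like this. This is just a sketch, not taken from a real application – the IsActive property is hypothetical, and it assumes an open ISession and the User entity from the configuration above:

```csharp
// Mark an HQL query as cacheable. SetCacheRegion is optional and ties
// the cached results to one of the regions defined in the config file;
// the entity and property names here are assumptions.
IList<User> users = session
    .CreateQuery("from User u where u.IsActive = :active")
    .SetParameter("active", true)
    .SetCacheable(true)
    .SetCacheRegion("User")
    .List<User>();
```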

When a query is cached the entity IDs in the results of a query will be stored along with the query itself in the second level query cache so that if it’s executed again it will take the list of IDs from the cache and will then re-hydrate the entities from the class cache. Because of this the query cache is useless without the class cache.

The next section deals with configuring how each class should be cached. This is still part of the hibernate-configuration section:


<!--
Sets up class and configuration caching for domain classes.
These cache definitions can be tweaked to change how each class is cached.
-->
<class-cache class="Core.User, Core" region="User" usage="read-write"/>
<collection-cache collection="Core.User.Roles" region="User.Roles" usage="read-write"/>

<class-cache class="Core.Role, Core" region="Role" usage="read-only"/>

Essentially there are two separate caching definitions to work with: the class-cache element, which deals with how a specific entity type should be cached, and the collection-cache element, which specifies how an entity’s dependent collection should be cached.

In this example I have elected to cache my User and Role entities as well as the Roles collection on the User entity.

Note that for each mapping you can specify a cache region, which provides additional granularity over how specific types are expired from the cache, and a usage, which can provide some additional optimization based on whether or not the application will ever update entities or create new ones.
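As an aside, NHibernate will also accept the same settings declared directly in the mapping files rather than centrally in the config file – a cache element can sit inside the class mapping itself. A sketch (the mapping is abbreviated and the table name is made up):

```xml
<!-- Equivalent declaration inside an hbm.xml mapping (abbreviated) -->
<class name="Core.User, Core" table="Users">
  <cache usage="read-write" region="User"/>
  <!-- id, properties, collections, etc. go here as usual -->
</class>
```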

The final section of my example deals specifically with configuring the SysCache caching provider:

<!--
Defines Syscache specific configuration
-->
<syscache>

<!-- Class cache regions -->
<cache region="User" expiration="60" priority="3"/>
<cache region="User.Roles" expiration="60" priority="3"/>
<cache region="Role" expiration="86400" priority="3"/>

</syscache>

Here you can define the cache regions used in the class-cache and collection-cache mappings above, as well as any cache regions you decide to use in queries inside your application. Each cache region defines how long items in that region should stay in the cache before being expired (after which they will be reloaded from the database) and a priority that indicates which items should be expired from memory first if available memory is getting too low.

You can, of course, dump everything into a single cache region, but then all entities will be treated equally in the cache. In my case I want the User entities and their Roles collections to be cached for one minute as this information could be updated externally and I want to have the changes reflected in the application fairly quickly. The Role entities, however, will hardly ever change so I’ve chosen to cache them for a whole day before they should be re-obtained from the database.

Hopefully that clears things up a bit. If there are any questions or comments please feel free to post one – don’t be shy!


Feb 27 2008

Layout Change

Tag: Uncategorized | Symon Rottem @ 5:46 am

Hello Reader!

You may have noticed a slight change to the layout of this blog – do not be alarmed! This change has been made for your own protection…

In fact, the main reason for the change is that I was getting very frustrated with the narrow, fixed-width nature of the previous template I was using. I decided I had to move to a layout that would not impact readers on a small screen (Nico, that’d be you!) while still allowing those with larger screens (me!) to view the information without pain.

This layout is a stock template from the WordPress archive and may well be temporary pending my finding the time to roll my own.

In the meantime, sit back, relax and try not to panic. :)


Feb 23 2008

NServiceBus is Growing Up!

Tag: Open Source, SOA | Symon Rottem @ 4:52 am

NServiceBus is an Open Source Enterprise Service Bus framework released not so long ago by Udi Dahan, and this week it got to its feet and wandered over to its new home at http://www.nservicebus.com. The site is still under construction, but it’s great to see a .NET OSS solution so recently released moving ahead so rapidly.

Over the last few months I’ve been evaluating it for use in an application and it’s certainly got a lot going for it. Configuration is not difficult and is planned to become even easier and using it is child’s play. There’s support for long running multi-message sagas, publish-subscribe semantics, pluggable transport layers, a vastly simplified and transport agnostic programming model and more – it’s definitely worth a look if you’re planning on building a scalable solution.

The only thing it’s been lacking so far has been documentation – a few months ago I submitted a patch with some code docs that might be used for generating API docs, but they’re a drop in the ocean. I really did them because I needed to read the code to understand what was going on under the hood, and marking up the methods with some code docs as I went just seemed like a good idea to help me grok the whole thing.

What’s really needed is some guidance and tutorial style documentation, and it looks like that’s beginning to get underway with the new site. Regardless, the source package comes with some great sample applications that demonstrate solutions to various common problems, and since reading well written code is about the most basic method of revealing intentions, the samples can work well to get you off the ground pending those additional docs.

I recently wrote a post called Do I Need Message Prioritisation? in which I discussed NServiceBus and some of the difficulties I was experiencing – I don’t think I gave it enough credit; it may not be the perfect fit for my specific problem, but I can certainly see its value and have plans to make heavy use of it in the future.

It’s really nice to have OSS alternatives available that don’t tie you to one specific vendor and I highly recommend NServiceBus as a software project to make friends with if you’re looking at writing scalable applications in an SOA context.

