
Saturday, December 10, 2005

Jetspeed-2.0 has been released!

Two years in the making, a busy last couple of months, and there it is: the Jetspeed 2.0 final release is out! From the release announcement:

The Apache Portals Jetspeed Team is pleased to announce the final release of the Jetspeed 2.0 Open Source Enterprise Portal. This final release is fully-compliant with the Portlet Specification 1.0 (JSR-168). Jetspeed-2 has passed the TCK (Test Compatibility Kit) suite and is fully CERTIFIED to the Java Portlet Standard.
The Jetspeed team will be presenting the new 2.0 release at ApacheCon US 2005 on December 10th in San Diego.
Jetspeed is a full implementation of the Java Portlet API. Notable features include security components backed by LDAP and database implementations and robust administration interfaces. Custom portals can be built and deployed using the Jetspeed plugin for Maven. Developers can use the Jetspeed PSML language to assemble portlets and the Apache Portals Bridges project to 'bridge' portals with existing technologies including Struts, JSF, PHP, and Perl. For GUI designers, Jetspeed comes with several built-in templates used to decorate portals and portlets. Join the growing community of Jetspeed users and developers at ApacheCon. David Sean Taylor will be presenting a Jetspeed tutorial that shouldn't be missed by anyone interested in the technology.


Features of the Final Release Include:

Standardized:
  • Fully compliant with Java Portlet API Standard 1.0 (JSR 168)
  • Passed JSR-168 TCK Compatibility Test Suite
  • J2EE Security based on JAAS Standard, JAAS DB Portal Security Policy
  • LDAP Support for User Authentication
Foundation Component Architecture:
  • Spring-based Components and Scalable Architecture
  • Configurable Pipeline Request Processor
  • Auto Deployment of Portlet Applications
  • Jetspeed Component Java API
  • Jetspeed AJAX XML API
  • PSML: Extended Portlet Site Markup Language
    • Database Persistent
    • Content Management Facilities
    • Security Constraints
Portal Core Features:
  • Declarative Security Constraints and JAAS Database Security Policy
  • Runtime Portlet API Standard Role-based Security
  • Portal Content Management and Navigations: Pages, Menus, Folders, Links
  • Multithreaded Aggregation Engine
  • PSML Folder CMS Navigations, Menus, Links
  • Jetspeed SSO (Single Sign-on)
  • Rules-based Profiler for page and resource location
  • Integrates with most popular databases including: Derby, MySQL, MS SQL, Oracle, Postgres, DB2
  • Client independent capability engine (HTML, XHTML, WML, VML)
  • Internationalization: Localized Portal Resources in 12 Languages
  • Statistics Logging Engine
  • Portlet Registry
  • Full Text Search of Portlet Resources with Lucene
  • User Registration
  • Forgotten Password
  • Rich Login and Password Configuration Management
Administrative Portlets:
  • User, Role, Group, Password, and Profile Management
  • JSR 168 Generic User Attributes Editor
  • JSR 168 Preferences Editor
  • Site Manager
  • SSO Manager
  • Portlet Application and Lifecycle Management
  • Profiler Administration
  • Statistics Reports
Web Framework Support and Sample Portlets:
  • Bridges to other Web Frameworks: JSF, Struts, PHP, Perl, Velocity
  • Sample Portlets:
    • RSS, IFrame, Calendar XSLT, Bookmark, Database Browser
    • Integration with Display Tags, Spring MVC
Customization Features:
  • Administrative Site Manager
  • Page Customizer
Portal Design Features:
  • Deployable Jetspeed Portlet and Page Skin (Decorator) CSS Components
  • Configurable CSS Page Layouts
  • Easy to Use Velocity Macro Language for Skin and Layout Components
Development Tools:
  • Automated Maven Build
  • Jetspeed-2 Maven Plugin for Custom Portal Development
  • AutoDeployment of Portlet Applications, Portal Resources
  • Deployment Tools
  • Plugin Goals integrated with Auto Deployment Feature
Application Servers Supported:
  • Tomcat 5.0.x
  • Tomcat 5.5.x
  • Websphere 5.1, 6.0
  • JBoss

The release is available for download from the Apache Download Mirrors:
http://portals.apache.org/jetspeed-2/download.html
We hope you enjoy using Jetspeed! Documentation is available at: http://portals.apache.org/jetspeed-2/.

Wednesday, November 09, 2005

Embedding Apache Directory Server

Apache Directory Server is an embeddable LDAP server written in Java. It is now embedded in Jetspeed-2, which fully supports LDAP for authentication and partially for authorization. The Jetspeed-2 security SPI has been implemented to support LDAP. Embedding Apache Directory Server has overall been quite a pleasant experience.
The first step consisted of integrating Apache DS with the Jetspeed-2 Maven Plugin:
<goal name="j2:_start.ldap">
  ...
  <java classname="org.apache.ldap.server.ServerMain" fork="yes">
    <classpath>
      <pathelement path="${maven.repo.local}/${plugin.groupId}/jars/jetspeed-security-schema-${jetspeed.version}.jar"/>
      <pathelement path="${plugin.getDependencyPath('directory:apacheds-main')}"/>
    </classpath>
    <arg value="${org.apache.jetspeed.plugin.ldap.conf}"/>
  </java>
</goal>
The above code invokes the Apache DS ServerMain startup class with the server.xml configuration file parameterized through ${org.apache.jetspeed.plugin.ldap.conf}. As illustrated above, Apache DS is also started with the Jetspeed schema extensions: the pathelement references jetspeed-security-schema, which holds the Jetspeed-specific schema extensions. The schema extension Java code is generated using the Apache DS Maven Plugin directory:schema goal; the classes are then compiled and archived as a referencable artifact for the LDAP server. Once the server is started, it is time to bind to it. Jetspeed-2 uses the Sun JDK LdapCtxFactory for its default binding configuration.
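For illustration, binding with the Sun LdapCtxFactory boils down to assembling a JNDI environment like the following sketch (the host, port, and credentials are made-up example values, not Jetspeed's actual defaults):

```java
import java.util.Hashtable;
import javax.naming.Context;
// javax.naming.directory.InitialDirContext would perform the actual bind

public class LdapBindingSketch {
    // Builds the JNDI environment used to bind to an LDAP server.
    public static Hashtable<String, String> buildEnv(String url, String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        // Sun JDK LDAP context factory, as in Jetspeed-2's default binding configuration
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = buildEnv(
            "ldap://localhost:10389", "uid=admin,ou=system", "secret");
        // With a running server: new InitialDirContext(env) performs the bind.
        System.out.println(env.get(Context.INITIAL_CONTEXT_FACTORY));
        // prints com.sun.jndi.ldap.LdapCtxFactory
    }
}
```

With a server actually listening, constructing an InitialDirContext from this environment performs the bind.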

Saturday, October 29, 2005

Fostering Tools Communication: Eclipse Application Lifecycle Framework

A few weeks ago, I wrote a blog post comparing the Microsoft Visual Studio Team Server and Eclipse development environments. Since then, I found out about a new Eclipse project that seems quite promising: Eclipse is hosting a new project to develop an Application Lifecycle Framework. A good overview was given by Ali Kheirolomoom at EclipseWorld this August. The Eclipse ALF's purpose is to:
Create a technology framework that will enable a diverse set of vendor tools, irrespective of architecture or platform, to exchange user data, manage business processes and collaborate in support of the chosen ALM infrastructure technologies in use by development communities.

The ALF project plans to create a common and extensible domain-specific vocabulary to facilitate domain modeling, and to provide an events and service flows model to enable loosely coupled tools integration. The technology will create an SOA leveraging web services and web services orchestration to integrate disparate tool sets.
ALF Overview
ALF is designed to build upon the other Eclipse tools and to provide additional support for security, web service orchestration, service flows and meta models, as illustrated below:
ALF Stack
One example of how ALF could be used is illustrated below:
ALF Use Case
A user adds an issue to an issue tracking system, which triggers an event that launches a service flow and determines whether the issue should be added to the Requirements Management System and Project Management System.

ALF plans to develop a meta model vocabulary based on the Zachman framework. The initial focus of the ALF will be on subject areas that cover:
  • Requirements management,
  • Request and issue management,
  • Configuration management and versioning,
  • Business process models
ALF Detailed Meta Model
ALF appears to me as a key Eclipse initiative that will provide better integration between disparate tools. It will also go a long way toward offering better visibility and metrics at the various levels of the application lifecycle.

Saturday, October 15, 2005

Integrating BIRT with Your Application

I recently started to explore BIRT - Eclipse Business Intelligence and Reporting Tool. As illustrated in the following examples available on BIRT's web site, it provides a wide range of reporting and charting capabilities.

One of the features that I find quite promising is the ability to easily embed the BIRT engine in custom applications. This can easily be illustrated through a basic unit test. The code below illustrates the key elements required to get started:
    /**
     * @see junit.framework.TestCase#setUp()
     */
    protected void setUp() throws Exception
    {
        super.setUp();
        // The directory where the key plugins are located.
        System.setProperty("BIRT_HOME", "C:/.../src/main/resources");
    }

    ...

    /**
     * @throws Exception Throws exception.
     */
    public void testRunReport() throws Exception
    {
        String[] args = {
            // The format
            "-f",
            "html",
            // The output directory
            "-o",
            "C:/.../target",
            // The locale
            "-l",
            "en_US",
            // The encoding
            "-e",
            "UTF-8",
            // The file to generate the report from.
            "C:/.../src/test/resources/helloworld.rptdesign"    
            };
        ReportRunner.main(args);
    }
The BIRT_HOME directory should contain the following runtime plugins, which are required by the embedded engine:
BIRT Plugins

Wednesday, October 05, 2005

Managing the Software Development Process: Microsoft is Getting it Right.

I recently attended a presentation on the upcoming Microsoft Visual Studio Team Suite (MVSTS) and I must say: what Microsoft is coming up with looks very much like an aggregation of the best practices that everyone preaches, all bundled into one very cohesive package. I decided to compare the tools available as open source and see how an open source stack measures up against MVSTS.

First, the Microsoft stack: I found two good resources for describing the MVSTS roadmap and getting an overview of MVSTS.
Visual Studio Team Overview

Second, the Open Source stack: I decided to focus on the Eclipse set of tools (see references below) and Apache Maven for software project management.
J2EE Open Source Tools

After integrating all the tools out there, the Open Source stack comes functionally fairly close to the MVSTS stack. Unfortunately, integrating all those technologies into one cohesive package requires a good amount of work. I personally feel that this is unfortunate. In my mind, it raises some fundamental questions:
  • What purpose are open source foundations fulfilling when developing their product offerings? Is it technology adoption, industry cooperation, technology innovation? Providing a cohesive offering requires making some choices that are difficult in foundations with members with sometimes competing interests. Here Microsoft has a clear advantage. So how can the open source community provide a cohesive tool kit for managing the software development process? Is it a desirable outcome?
  • Can commercial entities leverage a common offering and maintain a coherent strategy and competitive advantage? A lot of the value in commercial offerings comes from the integration of diverse technologies into a cohesive offer. If open source foundations fulfill this role, can commercial entities' offerings still remain attractive? Does this cannibalize product offerings in favor of services?
The answer to most of those questions depends on the type of technology. With regard to tools supporting the software development process, I feel it makes sense to foster collaboration. Providing a more integrated open source offering would serve as a foundation for the open source development ecosystem. The Eclipse foundation has done a wonderful job at doing so, but in a fragmented fashion. This is what Microsoft achieves by investing in its development tool suite: it fosters the adoption of its technology and platform and nurtures its technological ecosystem. I feel that all commercial vendors could benefit from what amounts to a fairly minimal investment. Most of the technology building blocks are already available.

Eclipse References:
- Eclipse UML2 tools
- Test and performance tools
- Testing tools
- Monitoring tools
- Web tools
- SDO tools.

Saturday, September 24, 2005

Idiosyncrasies of java.security.AccessController

As part of cleaning up the Jetspeed 2 JAAS RdbmsPolicy, I ran into some not-so-obvious idiosyncrasies of java.security.AccessController: the differences between doAs() and doAsPrivileged(), and whether or not to pass the AccessControlContext.
On the differences between doAs() and doAsPrivileged(), I found a good post on the Sun Java Forums:
doAsPrivileged effectively means you are granting the calling stack your [code's] privileges when executing the code in question, whereas doAs only associates the subject with the current access control context; all the calling code still requires the permission to be assigned to it (under the subject in question).

Where it gets interesting is that, when implementing a custom policy and assessing whether the caller is authorized to access the callee, the protectionDomain passed to implies(ProtectionDomain protectionDomain, Permission permission) does not contain the principals when performing a doAs() check. As mentioned in this post on the Java Forum, when the permission check is concerned with the principals in the subject (the call to protectionDomain.getPrincipals()), the security check should be performed as:
doAsPrivileged(theSubject, anAction, null)

By passing in a null access control context, the caller is essentially saying: "I don't care who called me, the only important thing is whether I have permission when associated with the given subject".
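A tiny sketch of that call shape (the subject and principal are made up for illustration; no SecurityManager or policy is involved here, so this shows only the API, not an actual permission check):

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class DoAsPrivilegedSketch {
    public static String nameOf(Subject subject) {
        // Passing null as the AccessControlContext: only the subject's
        // permissions matter, not those of the code on the call stack.
        return Subject.doAsPrivileged(subject,
            (PrivilegedAction<String>) () ->
                subject.getPrincipals().iterator().next().getName(),
            null);
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrincipals().add(() -> "alice"); // hypothetical principal
        System.out.println(nameOf(subject)); // prints alice
    }
}
```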

Subtle differences...

Wednesday, September 14, 2005

Making Sense of Identity Management

With the rise of service oriented architecture, maintaining a consistent user identity across multiple enterprise systems is becoming increasingly difficult. In an attempt to address the pain that many large IT organizations go through, the software industry has given birth to an onslaught of standards with the purpose of maintaining a common identity across the enterprise. Jason Rouault from HP has written a great paper that sheds some light on that space: Making sense of the federation protocol landscape. As an introductory reading, I strongly recommend An introduction to identity management as well. I like the following definition for identity management:
The set of processes, tools and social contracts surrounding the creation, maintenance, utilization and termination of a digital identity for people or, more generally, for systems and services to enable secure access to an expanding set of systems and applications.

The following picture sums it up well from a conceptual standpoint:
Identity Management Overview
In my view, the right identity management strategy can provide a strong competitive advantage to an organization: distributed applications and services can leverage a much richer picture of the user and thereby build added value to address the needs of employees, customers, partners, and suppliers. As organizations consider service-oriented architectures, it is critical to craft an identity management strategy in line with such distributed services.

Monday, September 05, 2005

Key Reports for Monitoring Application Development - Need for Historical Data

When managing distributed software development teams with wide-ranging skill sets, code base intelligence becomes critical to ensuring the quality of the ongoing development effort. Here are some lessons learned worth sharing:
  • Unit test code coverage should be put in historical context: Tools like Cobertura and Clover provide great unit test coverage reports; however, most default reports provide point-in-time coverage and, as a result, are difficult to use as a metric for the development effort. To create a successful developer testing practice, developers' activities should be measured against specific targets. Setting unit testing targets for developers is a great way to foster the creation of unit tests as an implicit and routine part of their activity. Historical measurement is critical to managing such activity efficiently. Clover provides such a historical coverage report.
  • Unit tests are necessary, but beware of bad tests: Enforcing unit test coverage is important to facilitate future development work and improve code quality (see previous post); however, poorly written unit tests can provide a false sense of confidence that proper checks are in place. Unit tests with poor assertion checks will yield reasonable code coverage figures but will not provide the checks needed to guarantee the quality of the code base. In addition to unit test reports and code coverage reports, unit test code should comply with a specific set of rules, as illustrated by the PMD JUnit rules.
  • Measure your developers' activities: Activity reports measuring development activity can provide great insight into the evolution of a code base and the activity level of various contributors. StatCVS provides a very detailed set of statistics that can be useful for understanding and monitoring the activities of a large and distributed team.
  • Dashboards and code quality indexes: Dashboards are critical for management to measure and assess the evolution of the various components of a large project. Maven provides a flexible plugin for a point-in-time dashboard. However, in order to properly follow the evolution of a large development project, historical data is important, as it provides the ability to identify key areas of improvement as well as measurable targets. The continuous improvement and continuous refactoring often advocated by agile development methodologies require good metrics to measure improvement and justify the benefits.
Those reports provide a sample set of tools that can be useful when managing development teams. However, one key issue for management is being able to correlate metrics improvements with critical business metrics such as development cost savings (shortened feature development timelines, decreased bug levels, shortened quality control cycles, etc.). More thoughts to come on that subject...

Wednesday, August 17, 2005

XML and XSL Reuse: Leveraging XML XInclude with Xerces and Xalan

XInclude is a recommended XML specification from the W3C. It essentially provides an alternative to DTD external entity references. A good overview of the differences is provided on Elliotte Harold's blog. The most appealing reason for using XInclude is that included XML documents are fully well-formed documents that can be processed individually.

Let's take a simple example. Let's assume that we have a ContactInfo.xml document:
<?xml version="1.0" encoding="UTF-8"?>
<hrxml:ContactInfo xml:lang="EN"
xmlns:hrxml="http://ns.hr-xml.org/2004-08-02"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../schemas/hrxmlResume-2.3.xsd"
xmlns:xi="http://www.w3.org/2001/XInclude">

<xi:include href="contactinfo/PersonName.xml"/>
</hrxml:ContactInfo>
which includes a PersonName.xml:
<?xml version="1.0" encoding="UTF-8"?>
<hrxml:PersonName xml:lang="EN"
xmlns:hrxml="http://ns.hr-xml.org/2004-08-02"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../../schemas/hrxmlResume-2.3.xsd"
xmlns:xi="http://www.w3.org/2001/XInclude">

<hrxml:GivenName>David</hrxml:GivenName>
<hrxml:FamilyName>Le Strat</hrxml:FamilyName>
</hrxml:PersonName>
Each document is fully well-formed and can be transformed individually. In this example, we can create a htmlPersonName.xslt stylesheet to format a person name:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:hrxml="http://ns.hr-xml.org/2004-08-02"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

<xsl:template match="hrxml:PersonName">
<xsl:value-of select="hrxml:GivenName" />
<xsl:text> </xsl:text>
<xsl:value-of select="hrxml:FamilyName" />
</xsl:template>
</xsl:stylesheet>
Leveraging Xalan (2.7.0), we can apply the stylesheet to the well-formed PersonName.xml document.

To do so, we can leverage the Xalan command line utility. In this particular example, we invoke the utility through an ant script as described below:
<java classname="org.apache.xalan.xslt.Process" fork="true" dir=".">
  <jvmarg value="-Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl"/>
  <jvmarg value="-Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl"/>
  <jvmarg value="-Dorg.apache.xerces.xni.parser.XMLParserConfiguration=org.apache.xerces.parsers.XIncludeParserConfiguration"/>
  <arg value="-IN"/>
  <arg value="${home.dir}/components/contactinfo/${xml.name}.xml"/>
  <arg value="-XSL"/>
  <arg value="${home.dir}/styles/html${xml.name}.xslt"/>
  <arg value="-OUT"/>
  <arg value="${home.dir}/output/${xml.name}.html"/>
  <arg value="-HTML"/>
  <classpath>
    ...Your classpath...
  </classpath>
</java>
The jvmarg elements are of particular interest, as they enable the processing of XInclude by the Xerces parser.
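As an aside, the JAXP API that ships with current JDKs can also be asked to resolve XInclude programmatically, without system properties. A self-contained sketch (using throwaway temp files with the same hr-xml namespace, rather than the full documents above):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XIncludeSketch {
    // Parses a document with XInclude resolution enabled and returns the
    // text of the first hrxml:GivenName element pulled in by xi:include.
    public static String includedGivenName(Path dir) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);   // required for XInclude processing
        dbf.setXIncludeAware(true);    // resolve xi:include while parsing
        Document doc = dbf.newDocumentBuilder()
            .parse(dir.resolve("ContactInfo.xml").toFile());
        return doc.getElementsByTagNameNS("http://ns.hr-xml.org/2004-08-02", "GivenName")
                  .item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("xinclude-demo");
        Files.writeString(dir.resolve("PersonName.xml"),
            "<hrxml:PersonName xmlns:hrxml=\"http://ns.hr-xml.org/2004-08-02\">"
            + "<hrxml:GivenName>David</hrxml:GivenName>"
            + "<hrxml:FamilyName>Le Strat</hrxml:FamilyName></hrxml:PersonName>");
        Files.writeString(dir.resolve("ContactInfo.xml"),
            "<hrxml:ContactInfo xmlns:hrxml=\"http://ns.hr-xml.org/2004-08-02\""
            + " xmlns:xi=\"http://www.w3.org/2001/XInclude\">"
            + "<xi:include href=\"PersonName.xml\"/></hrxml:ContactInfo>");
        System.out.println(includedGivenName(dir)); // prints David
    }
}
```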

In addition, for rendering ContactInfo.xml, we can leverage xsl:include as follows:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:hrxml="http://ns.hr-xml.org/2004-08-02"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

<xsl:include href="htmlPersonName.xslt" />

<xsl:template match="hrxml:ContactInfo">
<xsl:apply-templates select="hrxml:PersonName" />
</xsl:template>
</xsl:stylesheet>
And voila, we have achieved a high level of reusability of both presentation and data components!

Thursday, August 04, 2005

Cost benefits of Unit Testing

Many definitions are available for unit testing but, in simple terms, a unit test is a method for testing the correctness of a particular module of source code. Sounds like something most software development organizations would want to do... In the real world, however, the issue of cost benefits often comes up. It is indeed very hard to talk about the benefits of unit testing without knowing the answer to the following questions [1], [2]: How many defects did unit tests avoid? How much time was saved? And how much time and how many defects will be saved in the future?
Over the next few weeks, I will blog on the cost benefits of unit testing and how to quantify such benefits. Commonly quoted benefits of unit testing are [3], [4], [5], [6]:
  1. Problems are found early in the development cycle.
  2. Code that works now, will work in the future.
  3. New features will not break existing functionality.
  4. Making change becomes easier, as controls are in place.
  5. The development process becomes more flexible.
  6. Implementation design is improved as APIs are forced to be more flexible and unit-testable.
  7. Bringing new developers on board becomes easier and teamwork improves. Unit tests document the code.
  8. The need for manual testing is reduced.
  9. The development process becomes more predictable and repeatable.
However, quantifying such benefits is often challenging for organizations. Still, everyone agrees that the cost of fixing bugs or changing software increases exponentially the later issues are uncovered in the software life cycle. In a development process where identifying bugs is the responsibility of the quality assurance (QA) group, the QA group risks running into "bug indigestion". Unit testing is critical to prevent unit-level issues from being uncovered later in the software development life cycle. Software methodologies that rely heavily on developer unit-level testing can therefore achieve a much lower cost of ownership throughout the software development life cycle.
Cost Of Change Curves

Enforcing unit-level testing throughout the development process should be a major focus of any management team involved in delivering a software product or solution. Identifying the optimal amount of unit-level testing that maximizes the benefits of writing unit tests is hard to measure. Best practices suggest that the ratio of test code to code under test required to achieve at least 90% code coverage is between 2:1 and 4:1; this means that thoroughly testing a 100-line Java class requires 200 to 400 lines of test code. The higher the unit-level test coverage, the better the quality. That said, this is a best-case scenario and does not necessarily maximize the return on writing unit tests. My experience suggests that the benefits of unit testing can be achieved much sooner, but also that there seems to be a threshold above which the benefits of unit testing are most felt. Let's take as an example one of the projects I recently completed. As illustrated below, the initial phases of QA resulted in a flood of bugs. At the same time, levels of unit testing were insufficient. I would draw a first lesson from this observation:
To maximize the cost benefits of unit testing, start writing tests early. Focus on "quality" assertions, where assertions are performed against relevant data.
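To make the "quality assertion" point concrete, here is a deliberately simple sketch (totalWithTax is a made-up unit under test, not from any real project). Both checks below execute the same code and earn the same coverage, but only the second would catch a broken formula:

```java
public class AssertionQualitySketch {
    // Hypothetical unit under test.
    public static double totalWithTax(double net, double rate) {
        return net * (1.0 + rate);
    }

    public static void main(String[] args) {
        double total = totalWithTax(100.0, 0.05);

        // Weak assertion: passes for almost any implementation.
        if (total <= 0) throw new AssertionError("total should be positive");

        // Quality assertion: checks the computation against known, relevant data.
        if (Math.abs(total - 105.0) > 1e-9) {
            throw new AssertionError("expected 105.0, got " + total);
        }
        System.out.println("ok"); // prints ok
    }
}
```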

As testing went on, I believe we can clearly identify the inflection point where the value of unit tests really shows. As illustrated below, as our unit test levels improved, our development team was able to significantly improve its bug-closing rate without introducing new issues, and therefore avoid a bugterial infection. Hence the second key lesson:
Good unit-level test coverage allows projects to significantly shorten their bug-fixing cycle, resulting in direct cost benefits.

Bug Discovery Stats

Unit Testing Stats

Quantifying the cost benefits depends on the type of project under scrutiny; however, simple empirical data clearly demonstrates such benefits. As a result: if you are a developer, you should be writing unit tests today; if you are a manager, you should be driving the adoption of unit-level testing practices. The quality of your software and your ability to respond to your customers' demands will improve significantly. Be agile, today!

Friday, July 29, 2005

Some thoughts on JSR 196

There are a few open source security frameworks out there that follow an SPI model for their security implementation. Acegi is one; Jetspeed security is another. Both Spring-based frameworks follow an SPI concept, but the specifics are quite different from JSR 196. In the JSR 196 world, the javax.security.auth.container.AuthContextFactory is used to obtain context objects that encapsulate authentication modules and delegate to the ClientAuthModule or ServerAuthModule given the authentication context (ClientAuthContext or ServerAuthContext). Each authentication context is initialized according to a MessagePolicy that specifies what authentication guarantees the module is to enforce when securing or validating request and response messages within that context. A ServerAuthModule may delegate some of its security processing responsibilities to a LoginModule for JAAS authentication.

Regarding the management of the authentication modules' interaction, I found this comment in ServerAuthContext interesting:
Implementations also have custom logic to determine what modules to invoke, and in what order.

It would be nice to have policies or rules to manage that interaction...

Jetspeed Security - JAAS all the way...

I finally got some time to update the Jetspeed security documentation. I still have a bit of work to do, but I think this is a good beginning, and it was badly lacking. Jetspeed 2 fully leverages JAAS for authentication (through the implementation of javax.security.auth.spi.LoginModule) and authorization (through the implementation of a custom java.security.Policy), and provides a flexible security framework with a set of coarse-grained services for user management, role management, group management, and permission management.
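For readers new to JAAS, a bare-bones LoginModule has roughly the following shape. This is a hypothetical sketch with a hard-coded password check, purely illustrative; Jetspeed's actual modules delegate to the security SPI and its credential stores:

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;
import java.security.Principal;
import java.util.Map;

public class SimpleLoginModule implements LoginModule {
    private Subject subject;
    private CallbackHandler handler;
    private String user;
    private boolean succeeded;

    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
        this.handler = handler;
    }

    public boolean login() throws LoginException {
        NameCallback nc = new NameCallback("user: ");
        PasswordCallback pc = new PasswordCallback("password: ", false);
        try {
            handler.handle(new Callback[] { nc, pc });
        } catch (Exception e) {
            throw new LoginException(e.toString());
        }
        // Illustrative check only; a real module consults a user store (DB, LDAP...).
        user = nc.getName();
        succeeded = "secret".equals(new String(pc.getPassword()));
        if (!succeeded) throw new LoginException("bad credentials");
        return true;
    }

    public boolean commit() {
        if (succeeded) {
            String name = user;
            subject.getPrincipals().add((Principal) () -> name);
        }
        return succeeded;
    }

    public boolean abort() { succeeded = false; return true; }
    public boolean logout() { subject.getPrincipals().clear(); return true; }

    public static void main(String[] args) throws Exception {
        SimpleLoginModule module = new SimpleLoginModule();
        Subject subject = new Subject();
        CallbackHandler handler = callbacks -> {
            for (Callback c : callbacks) {
                if (c instanceof NameCallback) ((NameCallback) c).setName("alice");
                if (c instanceof PasswordCallback) ((PasswordCallback) c).setPassword("secret".toCharArray());
            }
        };
        module.initialize(subject, handler, Map.of(), Map.of());
        module.login();
        module.commit();
        System.out.println(subject.getPrincipals().iterator().next().getName()); // prints alice
    }
}
```

In a real deployment, the module is wired in through a JAAS login configuration and driven by a LoginContext rather than called directly.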

Jetspeed 2 Security Architecture Overview
The Jetspeed security SPI provides a pluggable authentication and authorization architecture. I found some of the similarities with Acegi, pointed out by Keith Gary Boyce on the Jetspeed user list, quite interesting.
For future releases, I am planning to investigate integration with JACC and with JSR 196. Additionally, Jetspeed provides some nice portlets that provide management features for the security framework.

Thursday, July 21, 2005

It isn't about Free Software

I have been involved in the open source community for a little while as a volunteer contributor and can relate to some of the dynamics that Marc Fleury describes in his post... However, as I develop a better understanding of how the OS community operates, I can't help but think: why isn't everyone jumping on the bandwagon?

Have you ever worked with a closed product that needed to be extended to address your business needs? Let's put it this way: I have never worked with one that did not need to... but very often, given the closed nature of most software products, extending a product's behavior results in numerous headaches for development groups. Most people tend to think of open source as Free Software. What a terrible misunderstanding of the promises of the OS movement! I have become convinced that the strength of open source isn't the source (code); it is the community. I hope that's what Jonathan Schwartz meant in his post on Free Software Has No Pirates. Opening a product's source code provides many advantages and extends the boundaries of the virtual organization:

- It encourages the development of a strong community.
- It makes products more easily extensible for customers.
- It fosters innovation on top of companies' product offering.
- It provides a virtual engineering team much larger than any organization may ever dream of.
- It is closely aligned with the market, its trends and evolution.

So why are companies so reluctant to open their code up? I strongly feel that the value of most products is not in the hidden "secrets" kept behind their closed source. It is in the integration with other enterprise solutions, in the flexibility that it provides to their customers to address their business needs today and not in the next release... It is in engaging, trusting and leveraging the community to provide users with solutions that meet their needs.

Marc is right: Open Source != Free Software; open source = community. Its true value is in the community it nurtures and in empowering its user base...

Saturday, July 16, 2005

Agora: A Community Network Visualizer

Wish you could get a better understanding of the dynamics of virtual communities? Who are the key members of the community? How do community members interact with each other? Well, here comes Agora... Stefano Mazzocchi sent an email to the Apache community demonstrating Agora and its usefulness in analyzing online communities. I must say, it is very nice: three years of collected data give you an in-depth view of various communities' interactions. Here is an example of the Jetspeed development community from January 2003 to July 2005. I was curious to see how much I had contributed to the community, so I decided to highlight my own name...

Jetspeed 2 Developer Community


If you want to try the tool for yourself, go to the demo posted by Stefano, or check out the latest version of the application from MIT on their Welkin project home page.

Tuesday, July 12, 2005

Jetspeed 2 Build Process Clean Up - Step 2

Warning: the instructions in this post may become outdated; please be sure to visit Jetspeed 2's Getting Started documentation for the latest instructions.

Step 2 of Jetspeed's build process clean-up has been committed. As a result, a few things have changed when building the latest Jetspeed (2.0-M4-SNAPSHOT and above). A summary of the changes can be found in the Apache Jetspeed JIRA.

The new steps to get started are as follows:

If you are building Jetspeed 2 from source for the first time:

cd ${jetspeed-2-home}
maven initMavenPlugin allClean allBuild

If the Jetspeed 2 maven plugin is installed, then to build the portal and all its components run:
 
cd ${jetspeed-2-home}
maven allClean allBuild

If you are using the Hypersonic SQL database, start the production Hypersonic database by typing:

maven j2:start.production.server

Then run quickStart (in a separate window/terminal session):

cd ${jetspeed-2-home}
maven j2:quickStart

This will recreate the DB to deploy into. WARNING: this will drop all the tables and data in the production database.

Start up Tomcat. With a browser, go to: http://localhost:8080/jetspeed

If you are creating a new Portal Application without the Jetspeed source:

In order to get started with a new portal application that will include a developer's specific portal customization, Jetspeed 2 provides as part of its Maven Plugin a goal that can get you started with your project. To do so:

1. Make sure that the following properties are set in your ${USER_HOME}/build.properties file:

- org.apache.jetspeed.portal.name: e.g. testportal. The name of your new portal application. This will also be used as the artifactId for your project in the maven repository.
- org.apache.jetspeed.genapp.home: e.g. C:/tools/workspace/testportal. The location where your new portal application should be created.
- org.apache.jetspeed.genapp.groupId: e.g. testportal. The maven pom group id indicates the group location for your project in the maven repository.
- org.apache.jetspeed.genapp.name: e.g. My Test Portal. A friendly name for your new portal.
- org.apache.jetspeed.genapp.currentVersion: e.g. 1.0. The current version for your new portal application.
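Putting the five properties together, a ${USER_HOME}/build.properties using the example values from the list above might look like this (the values are illustrative, not defaults):

```properties
# Hypothetical ${USER_HOME}/build.properties for generating a new portal application
org.apache.jetspeed.portal.name = testportal
org.apache.jetspeed.genapp.home = C:/tools/workspace/testportal
org.apache.jetspeed.genapp.groupId = testportal
org.apache.jetspeed.genapp.name = My Test Portal
org.apache.jetspeed.genapp.currentVersion = 1.0
```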

2. Once the above properties are set, make sure that the Jetspeed 2 Maven Plugin is installed on your local machine. You can install it as follows:

maven -DartifactId=maven-jetspeed2-plugin -DgroupId=jetspeed2 -Dversion=2.0-M4-SNAPSHOT plugin:download

where

- artifactId: The name of the Jetspeed2 plugin artifact deployed to the maven repository.
- groupId: The name of the group where the Jetspeed2 plugin is deployed in the maven repository.
- version: The version that you want to use. For this functionality, the version should be 2.0-M4-SNAPSHOT or above.

3. Run the Jetspeed2 plugin target for generating a new portal application:

maven j2:genapp.portal

4. Go to the directory where you just created your new portal application and execute:

maven j2:portal.install
maven j2:quickStart

That's it, you are ready to get started with your new portal application.

Google Search on Your Cell Phone!

Maurice Sidi posted a blog entry on Google SMS. What a great idea! This is a very smart move, and it sheds some more light on where the company is going. If this takes off, it could be huge! Think Google gets a lot of traffic today?

How does it work? Basically, you type a basic SMS message that you send to Google and get search results back. Remember those days when you would call 411 and forget to write down the number after being connected? Those days are over. Instead, you get an SMS message and can keep it for as long as you need it. Try it out for yourself.

1. Start a new text message and type in your search query

2. Send the message to the number "46645" (GOOGL)

3. You'll receive text message(s) with results

Want to learn more about it? Check out the Google SMS demo.

Wednesday, July 06, 2005

Jetspeed2 Build Process Clean Up

Using J2 in its current form requires an in-depth understanding of how J2 build internals operate.

As an example, an integrator wanting to get started with J2 will want to start with the portal web application and customize it from there. It should be easy for integrators to get started with the web application without requiring an in-depth understanding of the various sequences in the build process.

A typical implementation will want to create a project as described below:
\sample-portal
+---\etc            Contains the build dependencies definition.
+---\portal-webapp  Contains the portal web application being built.
+---\src            Contains the portal initialization source (db scripts, etc.).
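The layout above can be created with a few shell commands; this is just a sketch of the skeleton itself (the directory names are taken from the layout above, and "sample-portal" is an illustrative project name, not a Jetspeed convention):

```shell
# Create the suggested sample-portal project skeleton
# (adjust "sample-portal" to your own portal name)
mkdir -p sample-portal/etc            # build dependencies definition
mkdir -p sample-portal/portal-webapp  # the portal web application being built
mkdir -p sample-portal/src            # portal initialization source (db scripts, etc.)
ls sample-portal
```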

Building the portal in this structure should be possible by leveraging the deployed Jetspeed dependencies:
  • Components: All libraries (jars) required for the runtime operation of the portal engine.
  • Portlets: All web libraries (wars) required for the runtime operation of the portal engine.
Integrators using Jetspeed 2 should be able to do so easily and to easily obtain (through dependencies) the latest versions of the released Jetspeed components (libraries as well as portlets).

The current maven-plugin and portal build implementation rely on the source build (target directories) rather than on the dependencies for the assembly of the portal engine, making it more difficult to get started quickly and to keep up with enhanced components.

I started to work on cleaning up the build process with the objective of centralizing all deploy and install activities in the Jetspeed 2 maven plugin. This should greatly simplify getting started with Jetspeed 2. Work on this issue is being tracked in the Apache Jetspeed 2 Jira.

Friday, July 01, 2005

The Business of Software

I just finished reading "The Business of Software" by Michael Cusumano. Overall, a well-written book on the fundamentals of the software industry. The book focuses essentially on the analysis of the business model of a software company. There isn't anything strikingly new in Cusumano's analysis of software companies' business models, but the author does a good job of outlining the choices offered to software companies and showing how their business model will have to mature as companies and technologies mature.

Essentially, there are three choices for a software company:

  • A pure product play (some would argue that with the advance of open source and the broad adoption it has been getting lately, pure product plays are becoming much more difficult).
  • A mix of products and services.
  • A pure service play.
I found Cusumano's analysis of a typical enterprise software company's revenue over the course of a five-year business lifecycle very interesting. For every dollar of product license fees, $2.15 can be derived from services and maintenance. That's about 68% of the cumulated company revenue. In many cases, services on sold products end up being a life insurance against bad economic times. This brings some perspective to the Professional Open Source buzz. After all, a typical software company already ends up generating roughly two-thirds of its revenue from maintenance and services over a five-year lifecycle. When put in this context, the change in business model, though important, appears less radical than one may have initially perceived. For those doubtful of the viability of the open source business model, I think we get some empirical data making the case for it right there.

Cusumano goes on with his analysis and shows how, as companies mature, they tend to move towards more of a services model, and he provides excellent data to illustrate his argument. As we look at the growth of companies like IBM and Oracle, more and more of their revenue is coming from services. This clearly justifies their strategy behind open source, where they can capitalize on their strength in services while maximizing the use of their R&D resources.

Finally, I particularly enjoyed Cusumano's analysis of the factors that make software startup companies successful, with great examples to illustrate his point. Cusumano basically identifies 8 key criteria for assessing software startups:
  1. The quality of the management team.
  2. Whether the market is attractive and has strong potential.
  3. How compelling is the offering?
  4. How much interest is the offering getting from customers?
  5. Is the company credible?
  6. What is the business model?
  7. How flexible is the management team?
  8. What is the payoff potential?
A great list that applies to most business activities.

Wednesday, June 29, 2005

My First Post

I finally decided that it would be a good idea to share my thoughts and experience on software development, open source, and other matters dear to my heart. So here I am. I hope to share some of the lessons learned and best practices I run into through this blog. More to come in the next few days.