Monday, November 04, 2013

 

Next Generation Digital Signature Service

Doing a from-scratch rewrite is always an interesting event, as it is, in a way, a confession that your previous attempt reached its limits. The eID DSS emerged from an idea I had in the spring of 2005, leading to the creation of e-contract.be. Although eID DSS embodies much of those original ideas, it was only during the summer of 2008 that I could start shaping it. The original eID DSS features some fairly unique properties.

Two phase signatures

The most important one is that eID DSS supports what I call a proxy signature, or two-phase signature. Here the server side calculates the digest value, next the eID Applet signs the digest value, and finally the server side magically injects the signed digest value into the to-be-signed document. The big advantage of such an architecture is that you can keep the eID Applet very lightweight, as all the signature format logic lives at the server side. This is especially important for large-scale deployments as we witness them at government sites (federal e-procurement is using eID DSS). It also makes the support for different document and signature formats pluggable.
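
In pseudo-JCA terms the flow looks roughly like the following minimal sketch, where the DigestSigner interface is a hypothetical stand-in for whatever actually talks to the eID Applet and the smart card, and the digest algorithm is chosen arbitrarily:

import java.security.MessageDigest;

public class TwoPhaseSignature {

    // hypothetical client-side abstraction: the eID Applet has the card sign a digest value
    public interface DigestSigner {
        byte[] signDigest(byte[] digestValue);
    }

    // phase one (server side): compute the digest over the to-be-signed content
    public byte[] computeDigest(byte[] toBeSignedContent) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(toBeSignedContent);
    }

    // phase two (client side): only the small digest value travels to the eID Applet;
    // phase three (server side): the returned signature value gets injected into the
    // format-specific signature structure (XAdES, PAdES, ...), which is not shown here
    public byte[] sign(byte[] toBeSignedContent, DigestSigner digestSigner) throws Exception {
        byte[] digestValue = computeDigest(toBeSignedContent);
        return digestSigner.signDigest(digestValue);
    }
}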

DSS integration protocol

Another interesting feature is that eID DSS, compared to all the other signing boxes out there, offers a protocol to ease integration of qualified signatures into web flows. Similar to what authentication protocols offer between web applications and an identity provider, the eID DSS offers a signature creation and verification protocol.

From eID DSS to DSS

The biggest problem with the architecture of eID DSS is that the two-phase signature aspect, which is inherent to the design of the eID Applet (and which you certainly want to keep), manifests itself throughout all layers of the eID DSS. Besides the fact that back then Java EE 5 was not powerful enough to express what I needed, I was also unable to find a clean solution to convert this two-phase signature design back into the regular JCA way of doing Java crypto. Looking back now, these were two missing technological steps that had a tremendous impact on the way eID DSS took shape.

@-novation starts here

Today I can finally announce the availability of the next generation DSS technology. The end-user portal, basically a demonstrator for the capabilities of the DSS, is available at: https://www.e-contract.be/dss/

Completely written from scratch, the new DSS features a brand new DSS protocol. From a security point of view, this new protocol is exactly what we needed. The old protocol, as implemented in the original eID DSS, could easily be integrated in an insecure way by the developer. With the new protocol this is no longer possible, given its design. As we all know, protocols (and their implementations, especially authentication protocols) are very vulnerable to attacks, so this new protocol is a huge step forward.

Furthermore the way signatures are handled inside the new DSS is just so much easier to maintain compared to the preSign/postSign horror. This will allow us to add other signature and document formats without having to think about the DSS architectural twists. As a matter of fact jsignatures, the new signature engine, completely lives outside the DSS code base as a separate project.

This first version acts as a demonstrator of the capabilities of this new technology stack. For the moment we only enabled PDF PAdES LTV signature support. Any feedback on the behaviour or performance of the new DSS is always welcome.

Tuesday, March 13, 2012

 

Java EE 6 Security


Every time a new major release of the JBoss application server comes out, it is a challenge to see how the new security framework features fit our needs for the construction of highly secured web applications. In this article I will walk you through every layer of your application and show you how to enable the new security features that JBoss AS 7.1 brings.

Securing the view pages

It all starts with securing the JSF pages of your web application. This can easily be done using JBoss Seam 3.1. The first thing that you want to do is to define the roles that authenticated users can have within your system. In this example we construct a simple @User role as follows.

import org.jboss.seam.security.annotations.SecurityBindingType;
import org.jboss.seam.faces.security.RestrictAtPhase;
import org.jboss.seam.faces.event.PhaseIdType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.annotation.ElementType;

@SecurityBindingType
@RestrictAtPhase(PhaseIdType.RESTORE_VIEW)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.TYPE})
public @interface User {
}

Next we instruct Seam which pages should be protected under the custom defined role. For this we use JBoss Seam 3.1 Faces. In the following example we protect pages under /user/ so that they require an authenticated and @User authorized user.

import org.jboss.seam.faces.view.config.ViewConfig;
import org.jboss.seam.faces.view.config.ViewPattern;
import org.jboss.seam.faces.security.LoginView;
import org.jboss.seam.faces.security.AccessDeniedView;

@ViewConfig
public interface MyViewConfig {

    static enum MyPages {

        @ViewPattern("/user/*")
        @User
        USER_PAGES,

        @ViewPattern("/*")
        @LoginView("/login.xhtml")
        @AccessDeniedView("/denied.xhtml")
        ALL
    }
}

Via the login.xhtml page you somehow gather the user's credentials within the PicketBox credentials bean. The #{identity.login} action will trigger the Seam Security framework to authenticate the user. This requires an authenticator component to be present. The authenticator will simply use the credentials and set the identity after authenticating the user. Here you probably want to use some EJB3 component defined within your model to perform the actual authentication/authorization. In the following example this is done via the MyAuthenticationBean EJB3 session bean.

import org.jboss.seam.security.BaseAuthenticator;
import org.jboss.seam.security.Authenticator;
import javax.inject.Inject;
import org.jboss.seam.security.Credentials;
import org.jboss.seam.security.Identity;
import javax.ejb.EJB;
import java.util.Set;

public class MyAuthenticator extends BaseAuthenticator implements Authenticator {

    @Inject
    private Credentials credentials;

    @Inject
    private Identity identity;

    @EJB
    private MyAuthenticationBean myAuthenticationBean;

    @Override
    public void authenticate() {
        String username = this.credentials.getUsername();
        ... credential = credentials.getCredential();
        Set<String> roles = this.myAuthenticationBean.authenticate(username, credential);
        if (null == roles) {
            setStatus(AuthenticationStatus.FAILURE);
            return;
        }
        for (String role : roles) {
            this.identity.addRole(role, "USERS", "GROUP");
        }
        setUser(new SimpleUser(username));
        setStatus(AuthenticationStatus.SUCCESS);
    }
}

As you can see, the authenticator also adds the roles to the identity.
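
The MyAuthenticationBean itself is not part of Seam Security. A minimal sketch of what it could look like is given below; the credential type, the role name and the credential check are assumptions, and in a real application you would verify against your own user store (database, LDAP, ...).

import java.util.HashSet;
import java.util.Set;
import javax.ejb.Stateless;

@Stateless
public class MyAuthenticationBean {

    // returns the roles of the authenticated user, or null on authentication failure
    public Set<String> authenticate(String username, Object credential) {
        if (!isValidCredential(username, credential)) {
            return null;
        }
        Set<String> roles = new HashSet<String>();
        roles.add("user");
        return roles;
    }

    private boolean isValidCredential(String username, Object credential) {
        // placeholder: check the credential against your user store here
        return false;
    }
}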

Securing the controllers

The CDI controllers can now simply use the custom @User annotation to protect their methods:
...

import javax.inject.Named;

@Named
public class MyController {

    @User
    public String myAction() {
        ...
        return "...";
    }
}

Of course we still need to tell Seam Security how to map from the roles to the custom defined @User role annotation. For this we define a new CDI component that manages the actual authorizations.

import org.jboss.seam.security.Identity;
import org.jboss.seam.security.annotations.Secures;

public class MyAuthorization {

    @Secures
    @User
    public boolean authorizeUser(Identity identity) {
        return identity.hasRole("user", "USERS", "GROUP");
    }
}

So whenever Seam (CDI) needs a @User authorization, it will call the authorizeUser authorizer method.

Securing the model

For most applications you want to have the EJB3 business logic completely separate from the view/controllers, for example because you have multiple web applications, or because you want to have different interfaces towards the business logic (web application, SOAP web services, JSON). Eventually you want to have the same notion of authenticated/authorized users within your model's EJB3 session beans. An important design principle here is that you can never trust the outer layers of your application's architecture. So you want the model to re-verify the user's credentials. Or, in a more advanced scheme, you can generate a custom token (some HMAC or so) as part of the call to MyAuthenticationBean that the view can use afterwards towards the model. This is where we need to activate a security domain on your EJB session beans. First of all we have to propagate the user credentials towards the model. For this we need to define a custom security domain within the JBoss AS 7.1 configuration.

<subsystem xmlns="urn:jboss:domain:security:1.1">
    ...
    <security-domains>
        ...
        <security-domain name="my-security-domain-client" cache-type="default">
            <authentication>
                <login-module code="org.jboss.security.ClientLoginModule" flag="required">
                    <module-option name="multi-threaded" value="true"/>
                    <module-option name="restore-login-identity" value="true"/>
                </login-module>
            </authentication>
        </security-domain>
    </security-domains>
</subsystem>

Via the ClientLoginModule you can pass the user's credentials from the servlet container to the EJB3 container within the JBoss application server. There are several possible ways to log in to the my-security-domain-client security domain. One method is to use a custom servlet filter, as shown in the following example.

import javax.servlet.annotation.WebFilter;
import javax.servlet.*;
import javax.inject.Inject;
import java.io.IOException;
import org.jboss.seam.security.Identity;
import org.jboss.security.auth.callback.UsernamePasswordHandler;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
@WebFilter(urlPatterns = "/*")
public class LoginFilter implements Filter {

    @Inject
    private Identity identity;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
        throws IOException, ServletException {
        LoginContext loginContext;
        if (this.identity.isLoggedIn()) {
            ... user = (...) this.identity.getUser();
            UsernamePasswordHandler usernamePasswordHandler = new UsernamePasswordHandler(user.getId(), user.getCredential());
            try {
                loginContext = new LoginContext("my-security-domain-client", usernamePasswordHandler);
                loginContext.login();
            } catch (LoginException e) {
                throw new ServletException(e.getMessage());
            }
        } else {
            loginContext = null;
        }
        try {
            filterChain.doFilter(servletRequest, servletResponse);
        } finally {
            if (null != loginContext) {
                try {
                    loginContext.logout();
                } catch (LoginException e) {
                    throw new ServletException(e.getMessage());
                }
            }
        }
    }

    @Override
    public void destroy() {
    }
}

Now that the front end has performed a JAAS-based authentication, we need to use the passed credentials somehow within a security domain dedicated to our EJB3 model. For this we define a custom JAAS login module, which we again configure within JBoss AS as follows.

<subsystem xmlns="urn:jboss:domain:security:1.1">
    ...
    <security-domains>
        ...
        <security-domain name="my-security-domain" cache-type="default">
            <authentication>
                <login-module code="my.package.MyLoginModule" flag="required"/>
            </authentication>
        </security-domain>
    </security-domains>
</subsystem>

Despite the fact that we defined the security domain globally, the custom JAAS login module can live within our EJB3 model itself. This custom JAAS login module looks as follows.

import javax.security.auth.spi.LoginModule;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginException;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.Callback;
import java.util.Map;
import java.util.Set;
import org.jboss.security.SimplePrincipal;
import org.jboss.security.SimpleGroup;

public class MyLoginModule implements LoginModule {

    private CallbackHandler callbackHandler;

    private Subject subject;

    private String authenticatedUsername;

    private Set<String> authorizedRoles;

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
        Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
        this.callbackHandler = callbackHandler;
    }

    @Override
    public boolean login() throws LoginException {
        NameCallback nameCallback = new NameCallback("username");
        PasswordCallback passwordCallback = new PasswordCallback("password", false);
        Callback[] callbacks = new Callback[]{nameCallback, passwordCallback};
        try {
            this.callbackHandler.handle(callbacks);
        } catch (Exception e) {
            throw new LoginException(e.getMessage());
        }
        String username = nameCallback.getName();
        char[] credential = passwordCallback.getPassword();
        Set<String> roles = ... retrieve via some EJB3 session bean via InitialContext ...
        if (null == roles) {
            throw new LoginException("invalid login");
        }
        this.authenticatedUsername = username;
        this.authorizedRoles = roles;
        return true;
    }

    @Override
    public boolean commit() throws LoginException {
        if (null != this.authenticatedUsername) {
            this.subject.getPrincipals().add(new SimplePrincipal(this.authenticatedUsername));
            SimpleGroup rolesGroup = new SimpleGroup("Roles");
            for (String role : this.authorizedRoles) {
                rolesGroup.add(new SimplePrincipal(role));
            }
            this.subject.getPrincipals().add(rolesGroup);
        }
        return true;
    }

    @Override
    public boolean logout() throws LoginException {
        this.subject.getPrincipals().clear();
        return true;
    }

    ...
}

As you can see, a JAAS login module uses a two-phase commit design. The task here is to populate the subject using the callbackHandler. During the login phase we authenticate/authorize the user; this operation may fail if the user's credentials are invalid, for example. During the commit phase we simply commit the authentication transaction. In JBoss AS the EJB3 roles are passed via a Group principal named Roles.

Now we can finally use our security domain to protect the EJB3 session beans within our model. An example of such a protected session bean is given below.

import javax.ejb.Stateless;
import org.jboss.ejb3.annotation.SecurityDomain;
import javax.annotation.security.RolesAllowed;

@Stateless
@SecurityDomain("my-security-domain")
public class MyProtectedBean {

    @RolesAllowed("my-model-role");
    public void myProtectedBusinessMethod() {
    }
}

You can get the caller principal within the session beans via the SessionContext as shown in the following example code.

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import java.security.Principal;

    ...
    @Resource
    private SessionContext sessionContext;

    ... {
        Principal callerPrincipal = this.sessionContext.getCallerPrincipal();
        String callerName = callerPrincipal.getName();
        ...
    }


Context aware input validation


Since Bean Validation is part of Java EE 6, input validation has never been easier. The only thing you have to keep in mind is that you have to activate Bean Validation yourself. As we're interested in input validation on our model session beans, we have to use a custom EJB3 interceptor to activate Bean Validation on the methods. This is basically a copy of the BeanValidationAppendixInterceptor of OpenEJB.

import javax.annotation.Resource;
import javax.validation.Validator;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.ejb.SessionContext;
import java.lang.reflect.Method;
import org.hibernate.validator.method.MethodValidator;
import java.util.Set;
import javax.validation.ConstraintViolationException;
import javax.validation.ConstraintViolation;

public class BeanValidationAppendixInterceptor {

    @Resource
    private Validator validator;

    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object aroundInvoke(final InvocationContext ejbContext) throws Exception {
        Class<?> bvalClazzToValidate = ejbContext.getTarget().getClass();
        if (this.sessionContext != null && ejbContext.getTarget().getClass().getInterfaces().length > 0) {
            bvalClazzToValidate = this.sessionContext.getInvokedBusinessInterface();
        }
        Method method = ejbContext.getMethod();
        if (!bvalClazzToValidate.equals(ejbContext.getTarget().getClass())) {
            method = bvalClazzToValidate.getMethod(method.getName(), method.getParameterTypes());
        }

        MethodValidator methodValidator = this.validator.unwrap(MethodValidator.class);

        Set<?> violations = methodValidator.validateAllParameters(ejbContext.getTarget(),
            ejbContext.getMethod(), ejbContext.getParameters(), new Class[0]);
        if (violations.size() > 0) {
            throw new ConstraintViolationException((Set<ConstraintViolation<?>>) violations);
        }

        Object returnedValue = ejbContext.proceed();

        violations = methodValidator.validateReturnValue(ejbContext.getTarget(),
            ejbContext.getMethod(), returnedValue, new Class[0]);
        if (violations.size() > 0) {
            throw new ConstraintViolationException((Set<ConstraintViolation<?>>) violations);
        }

        return returnedValue;
    }
}

Funny that JBoss AS 7.1 doesn't provide this out-of-the-box. This would probably not look good on their benchmarks I guess.

So now we can use the Bean Validation framework to do input validation on our model session beans as follows.

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;
import javax.validation.constraints.NotNull;

@Stateless
@Interceptors(BeanValidationAppendixInterceptor.class)
public class MySessionBean {

    public void myMethod(@NotNull String message) {
        ...
    }
}

Besides the validation annotations that are defined as part of the Bean Validation specification, we can of course also define our own validation annotations and corresponding validators. What we would like to do here is to make our custom validators EJB context aware. So for example, we can do something like:

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
@Interceptors(BeanValidationAppendixInterceptor.class)
public class MySessionBean {

    public void myMethod(@CheckOwnership MyEntity myEntity) {
        ...
    }
}

The semantics of the @CheckOwnership annotation are to check whether the caller principal owns the MyEntity object. The validator corresponding to @CheckOwnership needs access to the caller principal so it can indeed check whether the MyEntity object is owned by the current caller. So let's define this validation annotation.

import java.lang.annotation.*;
import javax.validation.Payload;
import javax.validation.Constraint;

@Target({ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = CheckOwnershipValidatorBean.class)
public @interface CheckOwnership {
    String message() default "ownership error";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

Via the @Constraint annotation the Bean Validation framework knows which validator class it must use for validating parameters annotated with @CheckOwnership. Here the validator is actually an EJB3 session bean as shown in the following example.

import javax.ejb.Stateless;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.persistence.PersistenceContext;
import javax.persistence.EntityManager;
import javax.annotation.Resource;
import javax.ejb.SessionContext;

@Stateless
public class CheckOwnershipValidatorBean implements ConstraintValidator<CheckOwnership, MyEntity> {

    @PersistenceContext
    private EntityManager entityManager;

    @Resource
    private SessionContext sessionContext;

    @Override
    public void initialize(CheckOwnership checkOwnership) {
    }

    @Override
    public boolean isValid(MyEntity myEntityParam, ConstraintValidatorContext constraintValidatorContext) {
        MyEntity myEntity = this.entityManager.find(MyEntity.class, myEntityParam.getId());
        String username = this.sessionContext.getCallerPrincipal().getName();
        return username.equals(myEntity.getOwnerUsername());
    }
}

Of course the default validator factory does not support EJB3 session beans as validators. So we need to define a custom ConstraintValidatorFactory that is also capable of handling EJB3 session bean validators. In the following example the exception handling has been omitted for readability.

import javax.validation.ConstraintValidatorFactory;
import javax.validation.ConstraintValidator;
import javax.ejb.Stateless;
import javax.ejb.Stateful;
import javax.naming.InitialContext;

public class SessionBeanConstraintValidatorFactory implements ConstraintValidatorFactory {

    @Override
    public <T extends ConstraintValidator<?, ?>> T getInstance(Class<T> tClass) {
        Stateless statelessAnnotation = tClass.getAnnotation(Stateless.class);
        Stateful statefulAnnotation = tClass.getAnnotation(Stateful.class);
        if (null != statelessAnnotation || null != statefulAnnotation) {
            InitialContext initialContext = new InitialContext();
            T validator = (T) initialContext.lookup("java:module/" + tClass.getSimpleName());
            return validator;
        }
        return tClass.newInstance();
    }
}

This custom validator factory can be configured within the META-INF/validation.xml file as follows.

<validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration">
    <constraint-validator-factory>my.package.SessionBeanConstraintValidatorFactory</constraint-validator-factory>
</validation-config>

Friday, February 10, 2012

 

Trust Models

Given some recent events, it might be interesting to highlight the topic of trust models in the world of service oriented architectures (aka SOA).

While PKI is a key enabler for online business, PKI validation may also give a false sense of security, which is becoming more and more prominent as companies and organizations make their data available via SOA services. One should seriously reflect on how to implement a trust relationship, especially in system-to-system communications. Before throwing in full-blown PKI validation for the establishment of a system-to-system trust relationship, we should ask ourselves why it is again that PKI was invented. All too often you hear the argument that PKI allows you to construct a digital entity that can span multiple public keys over its lifetime. While this is true for textbook cryptosystems, it is not a really relevant argument in practice. Everyone knows that keys tend to live longer than certificates. And if the key of a server gets stolen in the context of a system-to-system trust relationship, you will certainly inform the system-to-system clients yourself instead of waiting for the revocation data of a trusted third party to propagate throughout the impacted systems.
Of course if the relying party is using PKI validation, it might be informed about a key compromise event in an automated way by means of the revocation data made available by the certificate authority. But then again, you're never 100% sure that all your clients have implemented PKI validation correctly.
So when and how do you want to use PKI?

PKI was mainly invented to be able to cope with trust establishment under the following premises:

Under these circumstances it makes sense to use full-blown PKI validation. In that case you should take care to restrict the trust anchors as much as possible. So if you can limit the trust anchor to, for example, the Belgian Root CA instead of GlobalSign, you should do so. This is why we introduced the concept of trust domains within our eID Trust Service SOA product. If you restrict a trust domain to a limited set of trust points and to a limited set of certificate policy identifiers, you have higher assurance that the distinguished names of all authenticated entities used within your system are indeed just that: distinguished (at least within your application context). This is also the reason why every member state publishes a trusted list in the context of electronic signatures. Via such trusted lists you can scope down the PKI trust model to your specific application context. For example, for the eSignature Service Directive trusted list the application scope is limited to qualified certificates managed on SSCD tokens. For Belgium you find this trusted list at: https://tsl.belgium.be/
On the other hand, if you take as trust points all trust anchors known within standard web browser systems (just to make sure) you end up in a situation where you indeed trust the whole world. Unfortunately we don't live in a love-and-peace utopia where everyone can hug each other and is willing to pass along the peace pipe. The uniqueness of the names assigned to entities via certificates is one of the weaknesses of the current PKI architecture. In the context of SSL they tried to improve this uniqueness by means of a required domain validation as part of the certificate creation process. The result is that in the context of SSL you can indeed take the full set of WebTrusted certificate authorities and are still able to operate with a certain assurance that the other end of the line is indeed who you think it is. At least, that's the idea behind it.

When it comes to the establishment of system-to-system trust relationships, using full-blown PKI validation might not be required. In practice it might even be undesirable, as it will most likely trigger more insecure situations where the trust relationships are not tightly enough defined. Let me explain. The simplest trust model between two entities is based on the fingerprint of the public key (or certificate if you want) of each entity. Such a scheme always works. If you look at some recent systems like Windows Identity Foundation you might have noticed that even the big guys like to default to such 'primitive' trust models. Why? Simple: developers can hardly screw up a fingerprint check. As long as they foresee a fingerprint rollover mechanism where two fingerprints can be configured (to be able to cope with service certificate renewal events), this model works great. The only downside to this strategy is that you need to follow up on the certificate renewal events per relying party application yourself.
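
A minimal sketch of such a fingerprint check could look like the following; here the fingerprint is a hex-encoded SHA-256 digest of the DER-encoded certificate, and the configured fingerprint values are of course assumptions. Configuring both the current and the next service certificate fingerprint gives you the rollover window mentioned above.

import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.util.List;

public class FingerprintTrustValidator {

    // typically two entries: the current and the next (rollover) service certificate
    private final List<String> trustedFingerprints;

    public FingerprintTrustValidator(List<String> trustedFingerprints) {
        this.trustedFingerprints = trustedFingerprints;
    }

    public boolean isTrusted(X509Certificate certificate) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(certificate.getEncoded());
        return this.trustedFingerprints.contains(toHex(digest));
    }

    private static String toHex(byte[] data) {
        StringBuilder hex = new StringBuilder();
        for (byte b : data) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
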
PKI validation on the other hand might be theoretically very sound, but given the capability (or incapability) of most developers, you might just think twice before mentioning 'PKI' to your development team. This was one of the drivers behind the jTrust project. Using full-blown PKI validation in system-to-system trust establishment might lead to situations where developers take the default (Java) PKI validation configuration and blindly accept what the default PKI validation engine tells them. What is often forgotten is that you need to take the application context into account. The OWASP WebScarab authentication protocol plugins that I've developed come with some security tests that have successfully demonstrated such lax configurations. So believe me, in reality this happens all too often, unfortunately.

Only in the context of SSL can you somehow trust the default PKI validation results of your platform. If you look at, for example, OpenID, you'll notice that the entire security depends on the trust in the SSL PKI validation which takes place during the OpenID association step. Hence the importance of running the OpenID association over SSL. The Diffie-Hellman option was an unfortunate decision as, again, it gives a false sense of security. If you're not 100% comfortable with this approach, you can also further restrict the set of trusted entities during the OpenID association step. That's the reason why we patched the OpenID4Java implementation to be able to define your own SSL trust manager. This patch has been merged into the official source code tree and is now part of the 0.9.6 release.

Translating the strategy we see with OpenID to other authentication protocols like, for example, WS-Federation is an interesting exercise. Basing your trust on SSL in the case of the WS-Federation web passive authentication protocol requires something similar to the OpenID association step. If you want to follow the WS-* specifications, you quickly end up with a WS-Trust STS validation service that allows you to validate SAML tokens that have been acquired via a WS-Federation web passive protocol run. The latest version of the eID IdP product contains such an STS validation service. Here the relying party can submit the SAML token to the STS as follows:
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  <soap12:Header>
    <wsse:Security
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecuritysecext-1.0.xsd"
      soap12:mustUnderstand="true">
      <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="assertion-id"
        IssueInstant="2012-02-09T09:27:17.433Z"
        Version="2.0">
        ... The SAML2 assertion received via WS-Federation web passive protocol...
      </saml2:Assertion>
    </wsse:Security>
  </soap12:Header>
  <soap12:Body>
    <trust:RequestSecurityToken
      xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512"
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecuritysecext-1.0.xsd"
      xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
      xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <trust:RequestType>
        http://docs.oasis-open.org/ws-sx/ws-trust/200512/Validate
      </trust:RequestType>
      <trust:TokenType>
        http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/Status
      </trust:TokenType>
      <trust:ValidateTarget>
        <wsse:SecurityTokenReference
          xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd"
          wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-tokenprofile-1.1#SAMLV2.0">
          <wsse:KeyIdentifier
            wsse:ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-tokenprofile-1.1#SAMLID">
            assertion-id
          </wsse:KeyIdentifier>
        </wsse:SecurityTokenReference>
      </trust:ValidateTarget>
      <wsp:AppliesTo>
        <wsa:EndpointReference>
          <wsa:Address>
            https://relying.party/landing/page
          </wsa:Address>
        </wsa:EndpointReference>
      </wsp:AppliesTo>
    </trust:RequestSecurityToken>
  </soap12:Body>
</soap12:Envelope>

Via the <wsp:AppliesTo> element the relying party can even further restrict the application context in which the SAML assertion should be considered as being valid.
The eID IdP STS validation service can now answer to the relying party as follows:
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  <soap12:Header>
    <wsse:Security
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecuritysecext-1.0.xsd"
      soap12:mustUnderstand="true">
      <wsu:Timestamp xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
        <wsu:Created>2012-02-09T09:27:19.197Z</wsu:Created>
        <wsu:Expires>2012-02-09T09:32:19.197Z</wsu:Expires>
      </wsu:Timestamp>
    </wsse:Security>
  </soap12:Header>
  <soap12:Body>
    <trust:RequestSecurityTokenResponseCollection xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <trust:RequestSecurityTokenResponse>
        <trust:TokenType>
          http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/Status
        </trust:TokenType>
        <trust:Status>
          <trust:Code>
            http://docs.oasis-open.org/ws-sx/ws-trust/200512/status/valid
          </trust:Code>
        </trust:Status>
      </trust:RequestSecurityTokenResponse>
    </trust:RequestSecurityTokenResponseCollection>
  </soap12:Body>
</soap12:Envelope>

The STS client can now again default to the standard SSL engine for the implementation of its trust model. Adding such an STS validation service to the eID IdP also opens the door to a seamless transformation from a web passive scenario (i.e. web browser) to an active scenario (i.e. web services). This feature is key to modern IAM architectures.

So let's summarize the different possible (non-exclusive) trust models that were mentioned: fingerprint validation of the peer's certificate or public key, full blown PKI validation with the trust anchors scoped down to your application context (trust domains, trusted lists), and trust that piggybacks on the SSL PKI validation of your platform.

An interesting variant is one where you combine both fingerprint validation and PKI validation. That way you have a very strict set of trusted end-points, and you still benefit from automated revocation propagation. Of course you still have to manage the fingerprint rollover.

Friday, May 08, 2009

 

SHA1 collisions now at 2^52

From the presentation of Cameron McDonald, Philip Hawkes and Josef Pieprzyk from Macquarie University and Qualcomm, Australia:

Practical collisions are within resources of a well funded organisation.

OpenPGP prepares a migration off of SHA1, stating:

Start making data signatures and web-of-trust certifications using stronger digests

NIST comments:

Federal agencies must stop relying on digital signatures that are generated using SHA-1 by the end of 2010.

A while ago I had this discussion at FedICT with some OS vendor concerning RSA 1024. I was surprised, and somewhat confused, that they were all discussing the cryptographic strength of RSA, while SHA1 seems to be a sitting duck. I'm not a crypto analyst myself; I'm merely scratching the surface now with some GNY logic proofs on tunneled authentication protocols and signature schemes (which is quite fun actually), but IMHO attacks on hash algorithms are more likely than anything else if you look at the attention this receives within the academic world.

To what extent are PKI infrastructures and client platforms ready to move to other hash algorithms like SHA2 or RIPEMD? How about the impact on the eID PKI? SHA1 is being used all over the place. Do we need SHA2 versions of all CA certificates? What would it bring us?

Definitely an area of interest that should be given some attention.

Anyhow the eID Applet comes with a challenge freshness verification on the authentication signature (using SHA1, but this is not really relevant as collisions are not important here) and the digital signature operations support SHA1-RSA-PKCS1, SHA224-RSA-PKCS1, SHA256-RSA-PKCS1, SHA384-RSA-PKCS1, SHA512-RSA-PKCS1, RIPEMD128-RSA-PKCS1, RIPEMD160-RSA-PKCS1, and RIPEMD256-RSA-PKCS1. ;)

Saturday, February 28, 2009

 

Could you sign here, please? (cont'd)

Three years after I blogged about the dangers in XML Signatures the W3C finally officially came up with a document highlighting the security hazards of XML Signatures. Check out:

XML Signature Best Practices

This makes for a very, very good read. I recommend it to all of you playing with SAML assertions, WS-Security, XAdES, XKMS, you name it. Some of you might finally realize why certain people can just walk through your web service security without any problem. :) Quite recently I had to explain to some girl implementing a SAML security module at FedICT why the heck you need to change the principal identifier and then recheck whether the signature digest value catches the change. Isn't that fun or what? Sometimes I've got the feeling that I'm the only one seeing these kinds of, well, let me call them, opportunities. As D. told me a while ago in Paris, "it takes a sick mind to understand how to break into a system". I looked at him and thought the same. :) And indeed, once you think in a particular way about things, it's fun all over the place.

It's time for general awareness about the dangers in XML Signatures, and the W3C is finally detailing this. My congratulations to the team working on this. Now if someone could find the time to write about how to prevent all the listed attacks using the Apache XML Security Java library, I would be even more delighted.

Friday, December 05, 2008

 

JBoss AS 5.0.0.GA versus The Model Is Broken


22:32:04,130 INFO [ServerImpl] JBoss (Microcontainer) [5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812042120)] Started in 22s:339ms


Java conferences driving the delivery of open source software by commercial companies. The greatest thing since sliced bread. I hope Stuart Cohen did not have a point when he stated that 'The Model Is Broken'. Is the SLA warning when downloading JBoss software indicating this model glitch? I'm actually afraid of one thing: the "open-source code is generally great code, not requiring much support" fata morgana. What if companies limit the quality to keep their support contract revenue streams alive? Maybe another reason not to jump on a new version as soon as it's available. "Commodity usage of commodity software" would make for a great blog entry.

I will be skipping Devoxx this year; time for some "Java EE, what the hell am I doing" self-reflection I guess. Meanwhile we've already pushed out the eID Middleware v3.5, the eID Middleware SDK v3.5 and the eID Quick Installer v1.0, which was also quite fun to help out with. More exciting things on the radar screen within the coming months... difficult to abandon Java EE altogether. So take a shaker, put in Java EE, some eID, and let's see what comes out next. ;)

Friday, May 16, 2008

 

Belgian EID Security

Recently there has been a lot of attention in the Belgian security world about hacking the Belgian Electronic Identity Card. I don't get where all of a sudden this noise comes from. Everybody who knows how to send APDUs to a smart card can read out the identity file (which also contains the unhashed national number) and the address file of the Belgian EID card without entering the PIN code. This has always been the case for applet versions 1.0 and 1.1 that are installed on the Belgian EID card. The card was designed like this, so what's the problem?

IMHO it's all about some companies running a FUD campaign so they can have a bigger influence on FedICT (where the money is). Especially given the situation that the people who knew something about EID at FedICT no longer run the show and have started their own company with the most appropriate name ever: The eID Company. This leaves FedICT in a very vulnerable position. One piece of advice to the new guys: stay cool. But then again, I'm not into politics.

Especially since Java 6, things have become very easy when it comes to reading out smart cards. In fact, reading out the Belgian EID card directly using APDUs is in some cases easier for me than using the EID middleware. As it is easier to install Java 6 on a desktop machine than it is to install the Belgian EID middleware, the choice is pretty straightforward when you're dealing with a virgin desktop system: just go directly to the card via the PC/SC stack.

As an example, let's read out the national number using the Java 6 Smart Card I/O API. First of all we need to set up a connection to the Belgian EID smart card.

TerminalFactory factory = TerminalFactory.getDefault();
CardTerminals terminals = factory.terminals();
List<CardTerminal> terminalList = terminals.list();
CardTerminal cardTerminal = terminalList.get(0);
Card card = cardTerminal.connect("T=0");
CardChannel cardChannel = card.getBasicChannel();

Now we can send an APDU to the card to select the identity file on the smart card as follows:

cardChannel.transmit(new CommandAPDU(0x00, 0xA4, 0x08, 0x0C,
new byte[] { 0x3F, 0x00, (byte) 0xDF, 0x01, 0x40, 0x31 }));

Reading out the file can be done by using the following statement multiple times:

cardChannel.transmit(new CommandAPDU(0x00, 0xB0,
highOffset, lowOffset, 0xFF));

The identity file itself has a simple tag-length-value (TLV) structure. The national number has tag number 6.
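
To give an idea of the remaining work, here is a rough sketch of a read loop and a TLV walk that extracts the national number. It assumes single-byte length fields, which suffices for the short fields up to and including tag 6, and the status word handling is kept to a bare minimum.

import java.io.ByteArrayOutputStream;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;

public class IdentityFileReader {

    // reads the currently selected file via READ BINARY until the card signals the end
    public static byte[] readSelectedFile(CardChannel cardChannel) throws Exception {
        ByteArrayOutputStream file = new ByteArrayOutputStream();
        int offset = 0;
        while (true) {
            ResponseAPDU response = cardChannel.transmit(new CommandAPDU(0x00, 0xB0,
                offset >> 8, offset & 0xFF, 0xFF));
            if (0x9000 != response.getSW()) {
                break; // offset ran past the end of the file
            }
            byte[] data = response.getData();
            file.write(data);
            offset += data.length;
            if (data.length < 0xFF) {
                break; // short read: end of file reached
            }
        }
        return file.toByteArray();
    }

    // walks the TLV structure; the national number lives under tag 6
    public static String findNationalNumber(byte[] identityFile) {
        int idx = 0;
        while (idx < identityFile.length - 1) {
            int tag = identityFile[idx] & 0xFF;
            int length = identityFile[idx + 1] & 0xFF;
            if (6 == tag) {
                return new String(identityFile, idx + 2, length);
            }
            idx += 2 + length;
        }
        return null;
    }
}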

You know, the big problem with the Belgian EID card is that almost everybody has forgotten their PIN code anyway. So for an EID-enabled application of the first hour to become deployable, you're actually forced to use the Belgian EID card without ever invoking any operation (like the compute digital signature APDU 0x00, 0x2A, 0x9E, 0x9A) that requires a PIN code. Even the security pop-up of the EID middleware about some application that will read out your private data from the card might freak out end users so much that they will flood your help desk in no time. Getting the big audience to use the Belgian EID will take some time and will require us (security developers, architects, whatever it is you're doing with this freaking card) to lower the security constraints in a controlled way. Don't try to run before you can walk.

Monday, December 17, 2007

 

JavaPolis 2007

Now, I can assure you I had a great time at JavaPolis this year. As a JavaPolis steering member and lead for the 2007 security track I had the opportunity to help determine the focus of this year's security presentations at JavaPolis. As I've received quite some signals from the Identity and Access Management sector over the last year, I gave it a swing with the SAML v2.0 and the Liberty Alliance ID-WSF v2.0 specifications. This is where Pat Patterson came into the picture: a good speaker who also appreciates the finer Belgian beers, which makes for a perfect combination. Since modern security heavily depends on XML Security, I had to have somebody on board covering the Apache XML Security project. Please welcome Sean Mullan here, who did a great job at presenting the current state of the art in Java XML Security and who also shed some light on the future of XML Security via his involvement in the W3C. Finally we also had Erwin Geirnaert covering the OWASP Java Project, which tries to close the gap between the OWASP top 10 list and Java application development.

For me JavaPolis started on Sunday evening. Stephan was quite happy that most of the steering members were present to help out. So after a few hours everything was set up and ready for the big event. Unfortunately, for some reason I didn't make it to JavaPolis on Monday. Nevertheless JavaPolis felt great on Monday. Thanks Ieniemienie for being who you are. ;)

Wednesday was undoubtedly the busiest day of all. The keynote made for an overflow of the overflow room, so there were a lot of people interested in seeing James doing his thing. The robot act was quite impressive and I'm already looking forward to whatever toy Sun will bring in next year.

Interesting links to check out:
http://www.oasis-open.org/committees/security/
http://www.projectliberty.org/
http://santuario.apache.org/
http://www.w3.org/Signature/
http://www.w3.org/2007/xmlsec/ws/report/
http://www.owasp.org/

Saturday, August 04, 2007

 

Java EE 6 Wishlist Continued

One of the things I don't like very much in the current Java EE 5 specification is how Java Persistence API (JPA) named queries are handled. Actually they're not handled at all, and that's the problem. What I mean by that is that you don't have strong Java typing as a safety net when using JPA named queries. Apparently I'm not the only one sensing this as a bad smell. There are already some solutions to this problem. The one I've been using for some time now is to add a static query factory method to the entity itself for each named query that it defines. For example:

@Entity
@NamedQueries(@NamedQuery(name = "byName", query = "FROM MyEntity WHERE name = :name"))
public class MyEntity {

    @Id
    String name;

    // ...

    public static Query createQueryWhereName(EntityManager em, String name) {
        Query query = em.createNamedQuery("byName");
        query.setParameter("name", name);
        return query;
    }
}

The advantage of this approach is that the producer and the consumer of the named query are as close to each other as possible. Thus when one changes the definition of the named query, it's easy to locate where the hell it's being used and to change the code over there as well. Although this approach works, it still does not solve the problem that the returned query object is 'stripped' of any possible type information that the named query could yield. The most elegant solution I've seen so far is based on the Query Interface (see section 20.1.2) of the upcoming JDBC 4.0 Specification, JSR 221. Using this query object design pattern the previous example looks as follows:

@Entity
@NamedQueries(@NamedQuery(name = "byName", query = "FROM MyEntity WHERE name = :name"))
public class MyEntity {

    @Id
    String name;

    // ...

    public interface QueryInterface {
        @QueryMethod("byName")
        MyEntity getEntity(@QueryParam("name") String name);
    }
}

Then when you want to execute the named query you simply do:

MyEntity.QueryInterface queryObject = QueryObjectFactory.createQueryObject(entityManager, MyEntity.QueryInterface.class);
MyEntity me = queryObject.getEntity("Frank Cornelis");

Basically the query object executes code equivalent with:

Query query = entityManager.createNamedQuery("byName");
query.setParameter("name", "Frank Cornelis");
return (MyEntity) query.getSingleResult();

The QueryObjectFactory I've implemented so far is capable of handling the following additional method annotation constructions:

@QueryMethod("all")
List<MyEntity> listAll();

Here the query object framework simply executes code equivalent with:

Query query = entityManager.createNamedQuery("all");
return query.getResultList();

Find methods can be done as follows:

@QueryMethod(value = "byName", nullable = true)
MyEntity findMyEntity(@QueryParam("name") String name);

Here the query object framework executes code equivalent with:

Query query = entityManager.createNamedQuery("byName");
query.setParameter("name", "the runtime name value");
List result = query.getResultList();
if (result.isEmpty()) { return null; }
return result.get(0);

You can even retrieve the original query itself via:

@QueryMethod("veryStrangeQuery")
Query getVeryStrangeQuery();

Besides this you can also let the QueryObjectFactory support update queries via an @UpdateMethod annotation.
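
For what it's worth, the core of such a QueryObjectFactory can be built on a plain dynamic proxy. The sketch below assumes @QueryMethod (with value and nullable attributes, retained at runtime) and @QueryParam annotation definitions matching the examples above, and it only covers the read-only query methods:

import java.lang.annotation.Annotation;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.Query;

public final class QueryObjectFactory {

    public static <T> T createQueryObject(EntityManager entityManager, Class<T> queryInterface) {
        InvocationHandler handler = new QueryInvocationHandler(entityManager);
        return queryInterface.cast(Proxy.newProxyInstance(queryInterface.getClassLoader(),
                new Class<?>[] { queryInterface }, handler));
    }

    private static final class QueryInvocationHandler implements InvocationHandler {

        private final EntityManager entityManager;

        QueryInvocationHandler(EntityManager entityManager) {
            this.entityManager = entityManager;
        }

        public Object invoke(Object proxy, Method method, Object[] args) {
            QueryMethod queryMethod = method.getAnnotation(QueryMethod.class);
            if (null == queryMethod) {
                throw new UnsupportedOperationException(method.getName());
            }
            Query query = this.entityManager.createNamedQuery(queryMethod.value());
            // bind every @QueryParam annotated argument as a named parameter
            Annotation[][] parameterAnnotations = method.getParameterAnnotations();
            for (int idx = 0; idx < parameterAnnotations.length; idx++) {
                for (Annotation annotation : parameterAnnotations[idx]) {
                    if (annotation instanceof QueryParam) {
                        query.setParameter(((QueryParam) annotation).value(), args[idx]);
                    }
                }
            }
            Class<?> returnType = method.getReturnType();
            if (Query.class.equals(returnType)) {
                return query; // the getVeryStrangeQuery case
            }
            if (List.class.equals(returnType)) {
                return query.getResultList();
            }
            if (queryMethod.nullable()) {
                List<?> result = query.getResultList();
                return result.isEmpty() ? null : result.get(0);
            }
            return query.getSingleResult();
        }
    }
}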

This query object design pattern is really the way to handle JPA named queries. It gives you back the strong Java typing you've always wanted and still lets you use the named queries directly if needed. Hopefully we'll see the QueryObjectFactory as part of the JPA specification in Java EE 6. JSR 313 already lists JPA as a technology to be updated, so that's looking good so far.

Credits go to Frederic Simon. Although I must say that I don't like the annotations he's using on his blog post JPA NamedQueries and JDBC 4.0, since they don't allow for reuse of named queries, and of course, to be modest, my implementation of the QueryObjectFactory is much more powerful. :)

A little copycat footnote to some of you who actually read my blog: as you can see, giving credit to the person who came up with the original idea isn't that difficult. You should try it also.

Wednesday, June 06, 2007

 

JAX-WS RI is Good, Good, Good!

When it comes to web service frameworks, I can say I've had my share over time. I once started with JAX-RPC for the SOA part of the first big product development I got involved in. We're talking about a distributed DRM system. The team was applying great technology, but the market was not ready for such a product. It was fun to do, no doubt about that. Then I tried out Axis, Axis2, XFire and JBossWS, and now I've finally ended up with JAX-WS RI 2.1.1, deployed within a JBoss 4.2 application container. My analysis: JAX-WS RI is Good, Good, Good!

The JAX-WS web service stack specification is finally mature enough to start implementing all those fancy WS-* related SOA specs that we always wanted to support in our products. The basic architecture is quite easy to grasp and shows some analogy with the servlet container specification. A servlet container basically has a notion of three aspects: servlets, filters and listeners. When looking at a JAX-WS container (if I may call it that) you'll find that endpoint definitions (the classes that you annotate with @WebService) correspond with servlets (that is, very roughly). JAX-WS handlers correspond with servlet filters, and the thing that Kohsuke Kawaguchi is doing with his JAX-WS commons project somehow corresponds (again, very roughly) with servlet container listeners (I know, it's more than that, he's doing the factory pattern over there). In the end it's all about proper life-cycle management of the JAX-WS endpoints and the corresponding handler chain that you want to apply over them.

Just like servlet filters can communicate with each other via the request/response object and their attached request and session contexts, so can JAX-WS handlers communicate with each other and with their serving endpoints via the SOAP context. This SOAP context makes for a pretty powerful messaging system within a JAX-WS runtime stack. It enables JAX-WS handlers to cooperate to achieve a certain aspect of the client-server communication protocol. For example, you could have one JAX-WS handler adding a SOAP header to the outbound message. If this handler wants its generated SOAP header to be signed by the WS-Security JAX-WS handler, it could use the SOAP message context to communicate the corresponding SOAP header element Id to the JAX-WS WS-Security handler. A similar pattern can be applied when it comes to verification of WS-Security signed elements. The WS-Security JAX-WS handler could, after successful verification, push the XML element Ids that have been signed correctly onto the SOAP messaging context. That way other JAX-WS handlers later on in the chain can first check whether their SOAP header element has been correctly signed by the WS-Security signature. In my view this is the way WS-Security should be handled. Forget those endless WSS XML configuration files. What you want to do is to manage the WS-Security aspect of your communication protocol programmatically via a highly customizable JAX-WS handler chain.
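
As an illustration of that cooperation, a minimal outbound SOAPHandler could look like the sketch below. The header QName, the wsu:Id value and the TO_BE_SIGNED_IDS context key are made-up names, and the WS-Security handler that would consume them is not shown.

import java.util.Collections;
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class MyHeaderHandler implements SOAPHandler<SOAPMessageContext> {

    // made-up context key under which the to-be-signed element Ids are published
    public static final String TO_BE_SIGNED_IDS = "my.package.to.be.signed.ids";

    private static final QName HEADER_NAME = new QName("urn:example", "MyHeader");

    private static final String WSU_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd";

    public boolean handleMessage(SOAPMessageContext context) {
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!Boolean.TRUE.equals(outbound)) {
            return true;
        }
        try {
            SOAPEnvelope envelope = context.getMessage().getSOAPPart().getEnvelope();
            SOAPHeader soapHeader = envelope.getHeader();
            if (null == soapHeader) {
                soapHeader = envelope.addHeader();
            }
            SOAPHeaderElement headerElement = soapHeader.addHeaderElement(HEADER_NAME);
            headerElement.addTextNode("some value");
            String id = "my-header-" + System.currentTimeMillis();
            headerElement.addAttribute(new QName(WSU_NS, "Id", "wsu"), id);
            // tell the WS-Security handler further down the chain which element Id to sign
            context.put(TO_BE_SIGNED_IDS, Collections.singletonList(id));
            context.setScope(TO_BE_SIGNED_IDS, MessageContext.Scope.HANDLER);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return true;
    }

    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    public void close(MessageContext context) {
    }

    public Set<QName> getHeaders() {
        return Collections.singleton(HEADER_NAME);
    }
}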

As already mentioned, I particularly like the way JAX-WS RI is going with this new JAX-WS commons project. Here we find an interesting aspecting pattern where you annotate the annotations themselves to attach behavior to them. The first time I saw this pattern was with the Hibernate validation framework. It's nice to see that JAX-WS RI picked this up and is exploring new possibilities. I for one already implemented an @Injection JAX-WS InstanceResolver annotation that finally made @EJB(mappedName="foobar") work within my JAX-WS endpoints when deploying my (from WSDL) web services on the JBoss Application Server. Hopefully we'll see more of such annotation-based aspecting showing up in container frameworks.

Friday, April 06, 2007

 

EE 6 wishlist

There is no doubt that Java EE 5 is a big step forward from the J2EE 1.4 specification. The two major improvements are the replacement of entity beans with JPA for ORM and of course the introduction of Java 5 annotations. You could state that the need for annotations is driven by two reasons. One is to diminish the need for endless XML configuration files. The second one is to help people stop thinking that aspects can be applied to methods without impacting the code itself. Most EE 5 annotations imply more than just a configuration parameter. For example,

sessionContext.getCallerPrincipal();

only really makes sense on methods annotated with

@RolesAllowed("the_allowed_role")

And in case the annotation really boils down to pure configuration, you can still reconfigure an aspect without altering the annotation parameters within the code, via the XML deployment descriptor overriding mechanism.

With all the recent noise about the start of the JCP process for the Java EE 6 specification, it's time to make up my personal wishlist for Java EE 6. So here we go...

The thing that I'm missing most in EE 5 is decent support for lifecycle management of your business entities. The problem with a J2EE application is that, after it has started, it just sits there waiting for someone to come in and poke the system alive. Most of the time you don't want to 'manually' make the J2EE application start breathing. So people came up with different solutions, of which the most container-independent one is a ServletContextListener that fires towards a stateless session bean, which in turn can start some timers via the TimerService. An interesting design pattern here is to let the ServletContextListener perform a JNDI listing over some predefined subcontext like for example

MyApplication/startup

If the listener finds a component registered in that JNDI context that also implements a Startable interface, it can simply start and stop those components. So to make startable components all you need to do is implement the following interface:

public interface Startable {
    void start();
    void stop();
}

and to register your component under the correct JNDI subcontext via the

@LocalBinding(jndiBinding = "MyApplication/startup/fireMeUp")

annotation, which should by the way not be JBoss specific but be part of the EE 6 spec. It would be nice to see all of this being replaced in EE 6 by some simple annotations like:

@PostStart
public void postStartCallback() {
    // thanks for starting me like the ServletContextListener
    // used to do
}

@PreStop
public void preStopCallback() {
    // thanks for notifying me of a shutdown like the
    // ServletContextListener used to do
}

on your stateless session beans. By allowing a priority parameter on the PostStart and PreStop annotations you could even do simple start-stop dependency management.

I know JBoss has some EJB3 extension to define JMX services, but this is not exactly what I'm looking for. These services are singletons that have a lifecycle scoped to the application itself. What I want is the lightweight version of this; I only want to receive a notification when the system starts and stops, just like it was handled via the ServletContextListener within your WARs.
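
For completeness, a minimal sketch of the ServletContextListener workaround described above: it lists the MyApplication/startup JNDI subcontext from the example, drives the Startable components it finds, and would be registered via web.xml (error handling kept to a bare minimum).

import java.util.LinkedList;
import java.util.List;
import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class StartupListener implements ServletContextListener {

    private final List<Startable> startedComponents = new LinkedList<Startable>();

    public void contextInitialized(ServletContextEvent event) {
        try {
            InitialContext initialContext = new InitialContext();
            NamingEnumeration<NameClassPair> names = initialContext.list("MyApplication/startup");
            while (names.hasMore()) {
                Object component = initialContext.lookup("MyApplication/startup/" + names.next().getName());
                if (component instanceof Startable) {
                    Startable startable = (Startable) component;
                    startable.start();
                    this.startedComponents.add(startable);
                }
            }
        } catch (Exception e) {
            throw new RuntimeException("startup failed: " + e.getMessage(), e);
        }
    }

    public void contextDestroyed(ServletContextEvent event) {
        for (Startable startable : this.startedComponents) {
            startable.stop();
        }
    }
}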

Another big missing thing is defining tasks. A task can be very useful to perform, for example, database cleanups. Right now, if you want some task to be performed at a regular interval, you need to play with the TimerService. And since it doesn't support cron expressions, you end up using Quartz, or at least its cron expression evaluator. What I would like to see is that you could annotate a method of a stateless session bean as follows:

@Task(name = "A Demo Task", preferredScheduling="0 0 * *")
public void pleaseRunMeEveryNowAndThen() {
// ...
}

The EJB3 container should group all these tasks per application. Then the application administrator could assign each task to a scheduling via some task console that is part of the application server.

While I'm at it: configuration management of J2EE applications is another big issue today. How do you capture configuration parameters in your current J2EE applications? You can use the env-entry element within your ejb-jar.xml deployment descriptors. But for most configuration parameters you want something more dynamic. Another possibility is to put your configuration somewhere in JNDI. Such an approach works, but you still need to have access to the JNDI tree via some console that is part of the application server.
What I would like to see is that you can annotate session bean fields as follows:

@Config(domain = "MyApplicationDomain1", name = "Initial Amount")
@ConfigConstraint(max = 10000)
@RolesAllowed("admin")
private int amount = 1234;

When the EJB3 container encounters such annotations on a field it should register the configuration parameter in a ConfigService that is application scoped. Each application configuration is divided into different domains. A configuration parameter has a name and an initial value. It can also have value constraints and roles to drive the input validation and RBAC of the configuration console of the application.

And then finally there is input validation. Instead of:

public String concat(String a, String b) {
    if (null == a) throw new IllegalArgumentException(...);
    if (null == b) throw new IllegalArgumentException(...);
    return a + b;
}

wouldn't it be cleaner to just write:

public String concat(@NotNull String a, @NotNull String b) {
    return a + b;
}

and let some EJB3 interceptor interpret the method parameter annotations and throw an IllegalArgumentException when it sees fit? This feature would be very similar to the Hibernate input validation annotations that you can put on your JPA entity fields.
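
Such an interceptor is easy enough to write yourself today. A minimal sketch, assuming a hypothetical @NotNull parameter annotation with runtime retention (neither the annotation nor the interceptor exist as a standard API):

import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
@interface NotNull {
}

public class NotNullInterceptor {

    @AroundInvoke
    public Object validateParameters(InvocationContext context) throws Exception {
        Method method = context.getMethod();
        Annotation[][] parameterAnnotations = method.getParameterAnnotations();
        Object[] parameters = context.getParameters();
        for (int idx = 0; idx < parameters.length; idx++) {
            for (Annotation annotation : parameterAnnotations[idx]) {
                if (annotation instanceof NotNull && null == parameters[idx]) {
                    throw new IllegalArgumentException(
                            "parameter " + idx + " of " + method.getName() + " is null");
                }
            }
        }
        return context.proceed();
    }
}

Activate it via @Interceptors(NotNullInterceptor.class) on your beans, or as a default interceptor in ejb-jar.xml, and every session bean gets the check for free.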

Definitely to be continued.

Monday, January 29, 2007

 

Seam seems seamless on security?

As a J2EE security freelancer I get to design, start up and develop a lot of systems that require bulletproof security features. Over the years I've noticed that the only way to design secure systems is by keeping the authentication and authorization mechanism as simple as possible. The Keep It Simple, Stupid (KISS) principle must be respected at all costs when it comes to security.

At Sun they also understood this principle, judging by the security framework provided in EJB. In EJB3 the security constraints are expressed via some very simple annotations. Basically you annotate a class or method with @RolesAllowed("the role") to activate role-based access control (RBAC) on the component. When you're using the JBoss Application Server (AS) you have an extra annotation called @SecurityDomain("name") to mark in which security domain the component lives. Besides the configuration of the security domain via JAAS login modules there is nothing more to it. In practice you can set up the security domain via a JBoss Service SAR package, which you can bundle as part of your application itself. This limits the deployment to one single EAR, which is as painless as it gets.
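
In code that is literally all there is to it; a minimal sketch with a made-up bean and security domain name:

import javax.annotation.security.RolesAllowed;
import javax.ejb.Local;
import javax.ejb.Stateless;
import org.jboss.annotation.security.SecurityDomain;

@Local
interface Payment {
    void approve(String paymentId);
}

@Stateless
@SecurityDomain("my-security-domain") // JBoss-specific: selects the JAAS security domain
@RolesAllowed("admin")
public class PaymentBean implements Payment {

    public void approve(String paymentId) {
        // only authenticated callers carrying the "admin" role get this far
    }
}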

So much for the EJB architecture. When you look at the servlet front-end you notice another approach when it comes to security. Instead of annotating components, you select via web.xml which web resources require which role. Besides this there's another aspect: because the servlet container directly communicates with the client-side browser, it also has to define the authentication interfacing mechanism itself. In my view the servlet specification is way off in this area. And now, with web frameworks like JSF, even the web-resource-based security role assignments don't make much sense anymore. The solution is quite simple: don't use servlet container security, it sucks.

This has led to a certain security vacuum in JSF application development, because initially the backing beans weren't EJB session beans and thus had no EJB security aspect applied to them. People had to come up with custom-made security frameworks to emulate RBAC on the backing beans. This all changed when JBoss came up with Seam. In this component framework EJB session beans act as JSF backing beans. This allows us to apply all EJB aspects onto the backing beans, so we can have EJB RBAC working again. Remember, one of its strengths is KISS.

In my view the approach should change when it comes to securing MVC applications like Seam JSF web applications. It does not make sense anymore to secure the view components when you have an MVC model with strict separation between the view and control components. The view components (JSF pages) only (1) format the data that comes from the control components (Seam BBs) and (2) present the possible operations that the user can invoke on the control components (Seam BBs). Thus if you apply the security aspect on the control components (and model components) this should do the trick. There is no added value in securing the shallow view components (JSF pages) anymore. This of course only holds when the view components always have to pass through the control components to acquire data or to invoke an operation. That said, the view level can still have a notion of the active principal.

And here I completely disagree with what JBoss is doing in Seam when it comes to security. JBoss wants to push the usage of their rules engine into Seam to also have security at the view level. That's probably one of the drawbacks of using a very popular open source package that is driven by one big software vendor like JBoss. They notice the success of one of their products (i.e. Seam) and try to gain better visibility for their other, not so successful, products (Drools, ...) via this successful one. This while it doesn't bring anything to secure the view level of your MVC Seam application anymore. There's also the fact that people are used to securing the view level and will not leave this habit that easily. So we end up with a quite complex (check out the Seam tutorial) Seam security module that completely violates the KISS principle. I'm really interested in how the Seam community will adopt this new security framework. I for one am going to stick for a while with the proven and rock solid EJB RBAC model.

Friday, January 19, 2007

 

Maven2 - One year later

About one year ago I switched from Maven1 to Maven2, leaving behind the Ant-ish way of building my software components. Over this year I've started 3 major SOA projects, of which 2 are new products, and a few small (i.e. < 20k LOC) projects. Was it worth switching to Maven2? Definitely, though obviously it didn't come as a free lunch. It is very clear that the setup of a software factory using Subversion, Continuum and a Maven repository is key to manageable software projects. You need to have version control, regression control and release management via a shared software repository. It is reassuring to see that the big players like Sun and JBoss also embrace these principles, and are even starting to use public Maven2 repositories to publish the latest releases of their components under development. By sniffing the Sun Maven repository you can even predict releases of their major products like JAX-WS RI.

What Maven2 is still missing is the notion of J2EE dependency contexts. The dependency scoping is not expressive enough in my view. It's very difficult to express, for example, that if dependency X is provided by the EAR, it should not go into the WAR. And if the container provides X, it should not even go into the EAR at all. Of course some of these rules are very dependent on the class loader behaviour of the container (which is a lot of fun in JBoss AS) in which you want to deploy your application. Maybe that's why this problem has not yet been tackled to the full extent it should have been.

I also get to hear quite often that Maven lacks documentation. This is the general case for most Free Libre Open Source Software (FLOSS) out there, simply because there is no legacy marketing engine behind these projects that forces the developers to produce tons of documentation. "Use the source, Luke." is most of the time the only way to understand how things work. And I for one prefer it that way since it prevents developers from hiding behind bad excuses.

Friday, December 15, 2006

 

JBoss at JavaPolis

This morning I had the privilege to see Marc Fleury in action at JavaPolis. This guy radiates arrogance all over the place, which makes him really the type of character to go and chase even more arrogant people from Novell or Microsoft. A big Bravo for that. He spoke about open source business models, which was very interesting given the situation I'm in right now. With JBoss, he was at the right moment at the right spot. Arrogant, but he commands respect. Not because of his personality, but because JBoss products just work. No more, no less.

Saturday, December 09, 2006

 

The YAGNI Application Server

Everybody knows about the You Ain't Gonna Need It principle, aka YAGNI, in software development. When developing a component, you should never add methods that no depending component needs right now. Although it's very tempting to start elaborating on the EJB3 entities and data access objects (DAOs), since it's a direct mapping from your beautiful domain model to Java, one should save one's gunpowder until the real battle pops up. This implies a top-down approach when adding a new feature to your application. You start from your user interface's view component (hopefully for you a Facelets JSF page) and add some methods to the connected control component (again, hopefully for you a Seam EJB3 bean). From within this bean you invoke other EJB3 components representing the domain model in one way or another. Of course, during development you always keep your domain model in mind and try to evolve the currently implemented model towards your ultimate theoretical domain model, but you always respect the twists that reality imposes on the theoretical domain model by means of your walking skeleton application.

YAGNI makes sense when applied to software development, i.e., when applied to code you write yourself. Never apply YAGNI to the selection of the frameworks on which your application will be built. If you need to shoot down a fly, take the cannon. Of course you only put enough gunpowder in the cannon to shoot down the fly. But make sure that it's a cannon that you're learning to control, since you never know what will show up next on the radar screen.

Applied to enterprise software development this means: run enterprise applications in enterprise containers like JBoss Application Server. Otherwise you end up in a situation like the one I witnessed a few years ago. On this project the lead developer also applied YAGNI to the framework selection. So at the beginning of the project (which was about developing a highly distributed SOA product) there was no need for a container at all, since there was only one component, x.y.z.Main, with one method called main. So plain J2SE could do the trick. Of course, after a while dependency injection would be a nice framework feature to have, since manual injection was boring. So Nanocontainer was introduced into the project. It worked, and was perfectly in line with the YAGNI principle. After a while we needed role-based access control (RBAC). No problem here: via DynAOP, used together with Nanocontainer, we would write our own security aspect, which did something similar to @RolesAllowed("user"). Believe me, this was really cool to write. And then after a while we needed transaction support. No problem here: again we would write our own transaction aspect, which emulated @TransactionAttribute(TransactionAttributeType.REQUIRED). A little later in the project we also needed some of the other TransactionAttributeType semantics (TransactionAttributeType.REQUIRES_NEW), and we also needed to make the messaging part of the transaction (something like JMS using a ConnectionFactory field annotated with @Resource(mappedName="java:/JmsXA")) to prevent certain data races.

At that point we already had something I call the YAGNI Application Server hosting our application. Basically we could, besides selling our application, also start selling our YAGNI Application Server. This was a great team of developers since, for the price of one product, we managed to construct two products. Of course the YAGNI Application Server was missing some features, like elaborate transaction support and transacted messaging, and it did not really follow any standard specification. Another big missing feature was J2EE deployability. Because different technologies were mixed and glued together that were not designed to fit together in the first place, we had to write endless scripts to deploy and start the YAGNI Application Server. Initially relatively simple Bash scripts were used for this, together with a Groovy script to manage the configuration of the application. But since the proxy-client also wanted our application to run on Windows, the lead developer then introduced a huge amount of Ruby scripts into the project. He started to model the entire deployment domain in Ruby. After a while the Ruby deployment and integration test framework was as complex as the application itself. To keep focused on the application itself, and not on the YAGNI Application Server product, we decided to switch over to the JBoss Application Server, which has all of these features out of the box.

Now here the lead developer came up with an even funnier proposal. We were not going to rewrite the components our application consisted of, oh no, we were going to make the YAGNI Application Server run within the JBoss Application Server. This was called phase one. As it was a highly Agile project, the goal was to always have a "working" system. Heh, this reminds me of a one-liner from one of our fellow developers: Agile software development, it's like fuck first, then think about it. Although I strongly believe in an Agile approach, one should consume it with a certain moderation, like with beer or women. Now, during the second phase we were going to rewrite every aspect of the YAGNI Application Server, like for example the transaction aspect, to use the JBoss Application Server transaction manager. The 3rd phase would be a rewrite of the YAGNI Application Server aspect interface to look exactly like the JBoss Application Server aspects from the point of view of the application components. And during the last phase we would remove the YAGNI aspect components from between the application components and the JBoss Application Server. This is where I learned to argue against bad ideas.

This project is also where I learned to appreciate the JBoss Application Server. Besides being a J2EE container, it offers a nice JMX-based architecture which allows you to manage the deployment of your applications over JMX, all from within Java itself. So there was no longer a need to mix Bash, Ruby, Groovy, Jelly and Java. We could now drive the entire build (Maven2) and deployment process (JMX) from within one language: Java. This is a principle they even already understood at Microsoft: use one language for everything.
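
As an illustration of that last point, here is roughly what driving a deployment over JMX from plain Java looks like on JBoss AS 4.x. The RMI adaptor JNDI name and the MainDeployer ObjectName below are the defaults of a standard JBoss AS installation, so treat them as assumptions to verify against your own configuration:

import java.util.Properties;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmxDeployer {

    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://localhost:1099");
        InitialContext initialContext = new InitialContext(env);

        // the RMI adaptor gives us a remote view on the JBoss MBean server
        MBeanServerConnection server =
                (MBeanServerConnection) initialContext.lookup("jmx/invoker/RMIAdaptor");

        // ask the MainDeployer service to deploy our EAR
        ObjectName mainDeployer = new ObjectName("jboss.system:service=MainDeployer");
        server.invoke(mainDeployer, "deploy",
                new Object[] { "file:/tmp/my-application.ear" },
                new String[] { "java.lang.String" });
    }
}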

I owe a lot to this lead developer. This guy taught me all about how to organize projects. The company for which we were developing this product was unfortunately, at that time, in a situation that had a big impact on some of us. Never leave behind a broken window, I guess.

Friday, December 08, 2006

 

J2EE Wannabee

Recently I had a discussion with someone trying to focus on J2EE security, as I am. In his project they proposed this wonderful new J2EE AOP security architecture that made me freeze over my beer for 30 seconds (we were sitting in a bar, hence the beer). As all team members on this project are very new to J2EE, including this person with whom I had some beers, nobody really noticed what the fuck they were about to create: the perfect J2EE wannabes. To pass a SAML assertion for authentication, they were going to create an EJB3 interceptor. They all noticed this is J2EE AOP, which probably is why it felt like a good thing to do. This interceptor would check every method's first parameter. And if it's of the SAML whatever-fuck-type class, they would check the content, which is their lovely SAML assertion, and thus this would drive their security. If that isn't a great invention for .NET-ers to come up with, I don't know.

Such scenarios are a serious threat to J2EE freelancers. Since EJB3, the learning curve has been lowered so much that newbies can learn J2EE within a few months. At least, that's what they think, while actually they're screwing up projects as hard as they did within .NET, leaving behind a non-positive view of J2EE with their project managers. And this is what I get upset about, since it directly impacts my possible project assignments as a freelancer.

Tuesday, November 28, 2006

 

The Harem-problem

Now this is the kind of blogging you can only afford once you got married, and this for pretty obvious reasons. Recently I started to clean up my hard disks and I stumbled on some files I produced when I was somewhat younger. A while ago I was a student at Ghent University. This is where I eventually got my academic degree in computer science. Being at a Belgian university we got plenty of maths to train our brain cells during the sparse sober moments as students. As for the other moments we were, well, mostly drunk, sitting in a bar ('t Kofschip, 't Kapelleke and den Delirium). Nipping our beers every now and then, we had endless discussions on sex, drugs and rock 'n' roll. On a few occasions the discussions shifted towards, how else could it be, mathematics. When sitting in a pub you get to see a nice girl passing by every now and then, which, combined with maths and beer, led to the following problem: how many women does a man need to be able to, you know, do it every day of the month, with a probability of let's say 95%? It's a classical student problem which was quickly called the Harem-problem. Although it sounds like a funny problem that found its creation in a pub, it made for a pretty big mathematical challenge. The result can be found here. So, still 3 to go I'd say.

Saturday, April 22, 2006

 

Could you sign here, please?

What I like very much about web services is that it's XML. It forces flat-file minded people to flip a switch in their brains and become XML junkies. Converting from flat-file to XML can go from simply putting an XML tag around the flat-file up to a well-structured and self-explaining XML document. Besides the fact that a nicely structured XML document is easier to work with, it also brings the additional benefit that project managers find it sexy in their presentations. So that's really the way to go, I'd say.

Now during this XML conversion process people might stumble on a thing called digital signatures. Some will take signatures as an argument for sticking with their flat-file, while others persist in their XML-ification and inevitably hit XMLDSig after a while. XML based signatures are a hot topic these days. With XAdES (see DContract for an implementation) even politics is trying to get a grip on it. Unfortunately converting regular signatures to an equivalent XML signature, according to the XMLDSig specification, is not a straightforward process. The big difference with XML signatures is the level of freedom. And freedom is dangerous when it comes to security. Instead of specifying the signature algorithms in a bullshit big-design-up-front document, one can now specify all signature parameters at runtime within the XML signature element itself. We can choose the signature algorithm, the digest algorithms, we can even specify what to sign via the ds:Reference elements. All this freedom is very cool as long as we don't forget that in the end it is a machine that has to be able to verify the correctness of the XML signature. Especially verifying what has been signed is a big challenge. Checking the URI attribute of the ds:Reference elements just isn't enough. A ds:Reference can contain transformations with the most exotic XPath expression one can ever think of. This opens up a door for XML signature attacks. So it is of crucial importance to also verify what has been signed. For the do-it-yourself creative brains among you: please don't start analysing the transformation expressions. Instead, analyse what the XML signature engine will eventually digest itself. In case you're using the Apache XML Security library, one can retrieve the DOM node set that will be digested by the XML signature algorithm via:

XMLSignatureInput signatureInput = signature.getSignedInfo().getReferencedContentAfterTransformsItem(0);
Set<Node> nodeSet = signatureInput.getNodeSet();

Take some critical DOM nodes out of the original document yourself and verify whether they're present in the digested node set. If not, someone is trying to mess with your system.
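
Continuing the snippet above, such a check could look something like this; the namespace and element name are placeholders for whatever is critical in your document, and document is the DOM of the originally signed document:

// verify that the element we actually care about was covered by the signature
Element criticalElement = (Element) document.getElementsByTagNameNS(
        "urn:example:namespace", "CriticalElement").item(0);
if (!nodeSet.contains(criticalElement)) {
    throw new SecurityException("critical element not covered by the XML signature");
}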

Thursday, February 23, 2006

 

The Bootstrap Aspect

One of the advantages of living on the bleeding edge when it comes to EJB3 is that you can come up with solutions other (.NET) developers can only dream of. Such solutions to common design problems don't necessarily need to be very complex. We all like to KISS every now and then, right? One such rather simple design problem is bootstrapping an application. It's like when you have a method in your code that is allowed to be executed only once. Most of the time developers go off like this:

void bootstrapApplication(...) {
    if (alreadyBeenCalled()) {
        throw new IllegalStateException("bootstrapping twice");
    }
    doBootstrapApplication();
}

Which, OK, works, but, on the other hand, is a very boring way of coding methods. Once you have to provide bootstrap functionality to a number of services within your system, it becomes too much of a copy-and-paste operation. It just smells bad. One way to handle such a situation is to aspectize the bootstrap... aspect. The idea is that you would like to end up with something very clean like:

@Bootstrap
void bootstrapApplication(...) {
    ...
}

where you hand over the bootstrap aspect to some EJB3 default interceptor. The bootstrap interceptor uses a bootstrap service to keep track of whether a method has already been called or not. One nice thing about this design is that you have a central point within your system that can be contacted to query for bootstrap information. Suppose we annotate a method like this:

public static final String A_BOOTSTRAP_ID = "system-x-bootstrap-id";

@Bootstrap(A_BOOTSTRAP_ID)
void bootstrapSystem(...) {
    ...
}

If, for some reason, we want to check whether this part of the application has already been bootstrapped, we simply go to the bootstrap service and ask for it via the given A_BOOTSTRAP_ID. Lovely, isn't it? This is what I like about such designs. The basic idea is simple. The implementation can be done very quickly when you run inside an EJB3 J2EE container. I implemented this in something like 30 minutes. (BTW: a nice definition of a container: something that manages all kinds of aspects, even the aspect of defining new aspects.) And despite its simplicity, it gives you a very powerful and flexible solution to the bootstrap problem.
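
For the curious, the interceptor itself is only a handful of lines. A rough sketch, in which the @Bootstrap annotation and the BootstrapService are of course my own hypothetical names and not an existing API:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.ejb.EJB;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Bootstrap {
    String value() default "";
}

interface BootstrapService {
    boolean hasRun(String bootstrapId);
    void markAsRun(String bootstrapId);
}

public class BootstrapInterceptor {

    @EJB
    private BootstrapService bootstrapService; // central point keeping track of bootstrap ids

    @AroundInvoke
    public Object manageBootstrap(InvocationContext context) throws Exception {
        Bootstrap bootstrap = context.getMethod().getAnnotation(Bootstrap.class);
        if (null == bootstrap) {
            return context.proceed();
        }
        // fall back to the method signature when no explicit bootstrap id was given
        String bootstrapId = bootstrap.value().length() == 0
                ? context.getMethod().toString() : bootstrap.value();
        if (bootstrapService.hasRun(bootstrapId)) {
            throw new IllegalStateException("bootstrapping twice: " + bootstrapId);
        }
        Object result = context.proceed();
        bootstrapService.markAsRun(bootstrapId);
        return result;
    }
}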

Tuesday, February 14, 2006

 

Maven2 versus The Build Customization

Along with a recent spike in one of the projects I'm working on, i.e. verifying whether we can run a highly distributed system in JBoss Application Server, I also took the opportunity to investigate the applicability of Maven2 as a production build system. Since I'm already quite experienced at working with Maven1 (also known as Ant on Jelly-based steroids), I thought this wouldn't be that much of an issue. But after a while I found out why they pushed for a new major version number. Besides the absence of good documentation (at JavaPolis I heard they're writing a Maven2 book, which is probably why there isn't much online documentation), this is really a complete make-over of the Maven1 build system. Compared to Maven1 you'll immediately notice that the repository is much better structured. Really new in Maven2 are the explicit build phases. But then again, this is something you already had to introduce in Maven1 projects when you wanted to use the reactor to perform a multi-project build. Another big improvement is the explicit plugin versioning. This is needed to make your builds completely reproducible.

While Maven1 still allows one to do Ant-ish builds, Maven2 completely breaks with this (not taking the ugly duckling maven-antrun-plugin into account). In the beginning you tend to feel that Maven2 really prevents you from doing things your way. With Maven1 you just had to create a maven.xml file to start customizing the build of your artifact. To customize a build with Maven2, you're almost forced to create a new Maven plugin, aka MOJO, if not define a completely new artifact packaging. But, when you start thinking about it, do you really want to do a custom build?
I think it's OK that it hurts when you're trying to customize your build, simply because you should not be doing so. F*ck the custom builds. In most cases you can get rid of them.

At Sun they spent a lot of time and money defining various J2EE archives to solve all kinds of problems. Some with more success than others. So why not simply settle for them and use a build system that is very good at dealing with standard J2EE component formats? It makes your daily life a lot easier (that is, at least the developer part of it; for the other aspects, a good advice: never use standard components). This is especially true when you're deploying your application on a standard application server like JBoss AS. (As for some of you with whom I've gotten into discussions on this topic, just for the record: MSSQL Server is not an application server, thank you.)

One area where I would like to see some improvements is JMX MBean packaging. On JBoss you have the SAR packaging, which works just fine, but it's not really a standard J2EE component yet. I don't think Sun's Glassfish will eat it as is. Not only because of the packaging, but also because of the container-specific JMX stuff you put inside your MBeans. It would be nice if the most common JMX services that a J2EE container offers were somehow standardized. This would make J2EE applications that need to tweak the container more container-independent. As for the MBean development itself, what is taking the Sun annotation junkies so long to clean up the meta-data API of the MBeans? Same remark for the Maven2 plugin API here. XDoclet-like annotations feel too much like, well, XDoclet. Hopefully we'll see some of the above-mentioned issues addressed by JCPs in the near future.

Once you can agree on building only standard J2EE components, I think you gain great benefit from using build systems like Maven2 compared to Ant-based ... systems. Don't listen to the stories of some people telling you that Maven2 still needs some time to mature. That's just script-kiddie nonsense. It had time to mature via Maven1. The only thing that needs to mature is those people's willingness to adopt new ways of building their components according to the (new EJB3) J2EE standards.

Thursday, February 02, 2006

 

JBoss AS 4.0.4RC1

JBoss AS 4.0.4RC1 is available for download from:

http://prdownloads.sourceforge.net/jboss/jboss-4.0.4RC1-installer.jar?download


It contains the latest EJB3 RC4, which is more in line with the EJB3 spec (or is it the other way around?). This unfortunately also forces you to update your EJB3 projects, since they probably won't run anymore because of the changes in RC4. So, have fun.

Sunday, January 29, 2006

 

The basket design pattern

We all know that some cross-cutting concerns can be nicely tackled via AOP. A school example of this can be found in the ejb3-interceptors-aop.xml file from JBoss Application Server, where you encounter declarations like:

<bind pointcut="execution(* @org.jboss.annotation.security.SecurityDomain->*(..))">
    <interceptor-ref name="org.jboss.ejb3.security.RoleBasedAuthorizationInterceptorFactory"/>
</bind>
<interceptor factory="org.jboss.ejb3.security.RoleBasedAuthorizationInterceptorFactory" scope="PER_CLASS"/>

AOP is really great when it comes to aspects like authentication, authorization, transactions, remoting, logging and validation. An aspect is all about the creation and execution of some additional code along the path of invocation on an object. A very important property here is that in general the advice is unaware of the actual state of the object over which it is applied. Most of the time the advice doesn't even care about the type of the objects it cuts across, since it can use annotations on the type to find out what to do.

Unfortunately, there are "data aspects" that require some kind of help from the objects themselves to be able to perform their task. A nice example of this is exporting data related to a certain (business) entity. Suppose your application defines person entities. These persons create different data artifacts during their life-cycle, for example reports and invoices. These data artifacts, which are all somehow related to a certain person, are scattered all over the model of your application. How are you going to implement the person export service?

One can go for a big, fat and centralized export component that invokes an export method on all components that hold data related to the person for which you want to export data. Thus this export component pulls the data out of the system. Great, it will work. No problem. Except when you add a new service that allows the person to generate additional exportable artifacts. Then you should not forget to also update the centralized export component so that these new artifacts get exported when requested. Who's going to keep track of that? Keeping the centralized export component in sync all the time is difficult. It smells bad. The problem with such a design is the lack of good separation of concerns (SoC).

A funny way to solve this problem is by using what I call the basket design pattern. In this design pattern different components within a system participate in filling a basket for a certain data aspect like exporting or backup. The basket initially contains just a ticket stating what should go in it. Then this basket is passed along all participating components until it's filled up. It's like shopping. A participating component can put both data and new tickets into the basket. The tickets can be used by other components to generate data and/or yet other new tickets.

The basket is managed by a basket producer. This component runs over the basket fillers until the basket no longer changes content.

boolean changed;
do {
    changed = false;
    for (BasketFiller basketFiller : basketFillers) {
        changed |= basketFiller.fillBasket(basket);
    }
} while (changed);

Basket fillers indicate that they want to participate in filling a basket of a certain type by registering themselves on the basket producer. In JBoss EJB3 this can easily be done via JNDI. A basket filler, which is just a @Stateless bean implementing a BasketFiller interface, registers itself to the basket for exporting via @LocalBinding(jndiBinding = "baskets/export/myname"). The basket producer can look up the objects via new InitialContext().list("baskets/export") and then start running over the basket fillers. Unit testing the basket producer can easily be done by using shiftone-oocjndi, but beware of the Mock language!
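
A minimal sketch of that basket producer, assuming the hypothetical baskets/export subcontext from above; Basket and BasketFiller are placeholder interfaces matching the description in this post:

import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;

interface Basket {
    // holds the tickets and the collected data; details omitted
}

interface BasketFiller {
    boolean fillBasket(Basket basket);
}

public class ExportBasketProducer {

    public void produce(Basket basket) throws Exception {
        InitialContext initialContext = new InitialContext();
        boolean changed;
        do {
            changed = false;
            // run over all registered basket fillers until the basket no longer changes
            NamingEnumeration<NameClassPair> bindings = initialContext.list("baskets/export");
            while (bindings.hasMore()) {
                NameClassPair binding = bindings.next();
                BasketFiller basketFiller = (BasketFiller) initialContext
                        .lookup("baskets/export/" + binding.getName());
                changed |= basketFiller.fillBasket(basket);
            }
        } while (changed);
    }
}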

One interesting aspect of the basket is the ticket. This is the medium for the participating components to communicate, even if they don't really know about one another. Thus each basket type defines a set of ticket types that can be used during the basket shopping. As for the data; each participating component can put whatever data into the basket.

A nice feature of this design pattern is that you have the SoC back again; the component generating an artifact is also responsible for the export aspect of it. This implies a decentralized push design. And, you can clearly scope the communication medium for a certain basket type by means of the set of ticket types. Thus it allows you to make a certain "data aspect" manageable again.

Sunday, January 22, 2006

 

Exit from Java

Recently I had to solve the problem of uniquely identifying the place where an exception was thrown by our Java application, without having access to the stack trace. Of course one can do funny things like:

throw new RuntimeException("Call the help desk (unique error code: 12e4ad7f)");

but this feels too much like Cobol, right? I wanted to somehow automate this. Since I didn't find any tool on the net that could possibly help me with this job, I wrote one myself. The idea behind this tool is to instrument each:

athrow

byte code with the following sequence:

dup
instanceof the.base.exception.class
ifeq nope
dup
checkcast the.base.exception.class
ldc "my-uuid"
invokevirtual the.base.exception.class.setIdentifier(Ljava/lang/String;)V
nope:
athrow

That way, when the application throws an exception, the place where this happens is automatically assigned a unique identifier.
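
For completeness, the base exception class this instrumentation assumes is nothing more than the following; the class name is hypothetical:

public class ApplicationException extends RuntimeException {

    private String identifier;

    public ApplicationException(String message) {
        super(message);
    }

    // called by the instrumented athrow sequence above
    public void setIdentifier(String identifier) {
        this.identifier = identifier;
    }

    public String getIdentifier() {
        return this.identifier;
    }
}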

The implementation of the tool uses BCEL for the actual byte code instrumentation of your classes. I was surprised by the architecture of BCEL; nice toolbox this is. Of course I also wrote an Ant task to make the tool easy to use. Since I could use some help/feedback on writing a smarter version of the Ant task, I open sourced the tool under the GPL. The project is called ExId, as in 'Exception Identification', or just as in 'exit'. It's available from:
http://www.frankcornelis.be/exid/

Tuesday, November 22, 2005

 

My First Post

To test or not to test...

http://www.frankcornelis.be
