Friday, February 10, 2012


Trust Models

Given some recent events, it might be interesting to highlight the topic of trust models in the world of service-oriented architectures (SOA).

While PKI is a key enabler for online business, PKI validation can also give a false sense of security, one that becomes more and more prominent as companies and organizations make their data available via SOA services. One should seriously reflect on how to implement a trust relationship, especially in system-to-system communications. Before throwing a full blown PKI validation at the establishment of a system-to-system trust relationship, we should ask ourselves why PKI was invented in the first place. All too often you hear the argument that PKI allows you to construct a digital identity that can span multiple public keys over its lifetime. While this is true for textbook cryptosystems, it is not a very relevant argument in practice. Everyone knows that keys tend to live longer than certificates. And if the key of a server gets stolen in the context of a system-to-system trust relationship, you will certainly inform the system-to-system clients yourself instead of waiting for the revocation data of a trusted third party to propagate throughout the impacted systems.
Of course, if the relying party is using PKI validation, it might be informed about a key compromise event in an automated way via the revocation data made available by the certificate authority. But then again, you're never 100% sure that all your clients have implemented PKI validation correctly.
So when and how do you want to use PKI?

PKI was mainly invented to be able to cope with trust establishment under the following premises:

- the set of communicating entities is large and open, so exchanging keys pairwise does not scale;
- the entities have no prior relationship, so trust has to be derived from a commonly trusted third party;
- keys can get compromised, so revocation information has to propagate automatically to all relying parties.

Under these circumstances it makes sense to use a full blown PKI validation. In that case you should take care to restrict the trust anchors as much as possible. So if you can limit the trust anchor to, for example, the Belgian Root CA instead of GlobalSign, you should do so. This is why we introduced the concept of trust domains within our eID Trust Service SOA product. If you restrict a trust domain to a limited set of trust points and a limited set of certificate policy identifiers, you have higher assurance that the distinguished names of all authenticated entities used within your system are indeed just that: distinguished (at least within your application context). This is also the reason why every member state publishes a trusted list in the context of electronic signatures. Via such trusted lists you can scope down the PKI trust model to your specific application context. For example, for the eSignature Service Directive trusted list the application scope is limited to qualified certificates managed on SSCD tokens. For Belgium you find this trusted list at:
On the other hand, if you take as trust points all trust anchors known to standard web browsers (just to make sure), you end up in a situation where you indeed trust the whole world. Unfortunately we don't live in a love-and-peace utopia where everyone can hug each other and is willing to pass along the peace pipe. The uniqueness of the names assigned to entities via certificates is one of the weaknesses of the current PKI architecture. In the context of SSL this uniqueness is improved by means of a mandatory domain validation as part of the certificate issuance process. The result is that in the context of SSL you can indeed take the full set of WebTrusted certificate authorities and still operate with a certain assurance that the other end of the line is indeed who you think it is. At least, that's the idea behind it.
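The trust domain idea described above can be sketched as a simple post-validation check: accept a certificate only when its chain ends in one of a small set of configured trust points and it carries one of the expected certificate policy identifiers. The class name, subject names and OIDs below are made up for illustration; in a real deployment these values would come from the validated certificate chain.

```java
import java.util.Set;

// Illustrative sketch of a "trust domain": a limited set of trust points
// plus a limited set of certificate policy identifiers. A certificate is
// accepted only when both constraints hold.
public final class TrustDomain {

    private final Set<String> trustPointSubjects;  // e.g. only the Belgian Root CA
    private final Set<String> allowedPolicyOids;   // e.g. qualified-certificate policies

    public TrustDomain(Set<String> trustPointSubjects, Set<String> allowedPolicyOids) {
        this.trustPointSubjects = Set.copyOf(trustPointSubjects);
        this.allowedPolicyOids = Set.copyOf(allowedPolicyOids);
    }

    public boolean accepts(String rootSubject, Set<String> certPolicyOids) {
        if (!trustPointSubjects.contains(rootSubject)) {
            return false; // chain does not end in one of our trust points
        }
        // at least one certificate policy must be in the allowed set
        for (String oid : certPolicyOids) {
            if (allowedPolicyOids.contains(oid)) {
                return true;
            }
        }
        return false;
    }
}
```

With the JDK's PKIX machinery the same restriction can be expressed via `PKIXParameters` (a single `TrustAnchor` plus `setInitialPolicies`), but the point here is the scoping itself, not the API.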

When it comes to the establishment of system-to-system trust relationships, using a full blown PKI validation might not be required. In practice it might even be undesirable, as it will most likely create insecure situations where the trust relationships are not defined tightly enough. Let me explain. The simplest trust model between two entities is based on the fingerprint of the public key (or certificate, if you want) of each entity. Such a scheme always works. If you look at some recent systems like Windows Identity Foundation, you might have noticed that even the big guys like to default to such 'primitive' trust models. Why? Simple: developers can hardly screw up a fingerprint check. As long as they provide a fingerprint rollover mechanism where two fingerprints can be configured (to be able to cope with service certificate renewal events), this model works great. The only downside of this strategy is that you need to follow up on certificate renewal events per relying party application yourself.
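A minimal sketch of such a fingerprint check with a two-fingerprint rollover might look as follows. The class and method names are my own, and the fingerprints are assumed to be lowercase hex SHA-256 hashes of the encoded certificate.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.List;
import java.util.Set;

// Sketch of the fingerprint trust model: the service certificate is pinned by
// its SHA-256 fingerprint. Two fingerprints can be configured so that a
// service certificate renewal can roll over without downtime.
public final class FingerprintTrust {

    private final Set<String> acceptedFingerprints;

    public FingerprintTrust(String currentFingerprint, String nextFingerprint) {
        // Set.copyOf tolerates current == next during normal (non-rollover) operation
        this.acceptedFingerprints = Set.copyOf(List.of(currentFingerprint, nextFingerprint));
    }

    public boolean isTrusted(byte[] encodedCertificate) {
        try {
            String fingerprint = HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(encodedCertificate));
            return acceptedFingerprints.contains(fingerprint);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```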
PKI validation, on the other hand, might be theoretically very sound, but given the capability (or incapability) of most developers, you might think twice before mentioning 'PKI' to your development team. This was one of the drivers behind the jTrust project. Using a full blown PKI validation in system-to-system trust establishment might lead to situations where developers take the default (Java) PKI validation configuration and blindly accept whatever the default PKI validation engine tells them. What is often forgotten is that you need to take the application context into account. The OWASP WebScarab authentication protocol plugins that I've developed come with security tests that have successfully demonstrated such lax configurations. So believe me, in reality this unfortunately happens all too often.

Only in the context of SSL can you somehow trust the default PKI validation results of your platform. If you look at, for example, OpenID, you'll notice that the entire security depends on the trust in the SSL PKI validation that takes place during the OpenID association step. Hence the importance of running the OpenID association over SSL. The Diffie-Hellman option was an unfortunate decision as, again, it gives a false sense of security. If you're not 100% comfortable with this approach, you can also further restrict the set of trusted entities during the OpenID association step. That's the reason why we patched the OpenID4Java implementation to make it possible to define your own SSL trust manager. This patch has been merged into the official source code tree and is part of the 0.9.6 release.
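The kind of restricted trust manager that such a pluggable hook allows can be sketched as a certificate-pinning X509TrustManager. This is a simplified illustration, not the actual OpenID4Java code; the pinned fingerprint is a lowercase hex SHA-256 hash of the provider's encoded server certificate.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.HexFormat;

// Sketch: accept only the one pinned server certificate, instead of the
// platform's full set of trust anchors.
public final class PinnedTrustManager implements X509TrustManager {

    private final String pinnedSha256Fingerprint;

    public PinnedTrustManager(String pinnedSha256Fingerprint) {
        this.pinnedSha256Fingerprint = pinnedSha256Fingerprint;
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(chain[0].getEncoded());
            if (!HexFormat.of().formatHex(digest).equals(pinnedSha256Fingerprint)) {
                throw new CertificateException("server certificate not pinned");
            }
        } catch (CertificateException e) {
            throw e;
        } catch (Exception e) {
            throw new CertificateException(e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("client authentication not supported");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }

    // Convenience: an SSLContext that trusts only the pinned certificate.
    public static SSLContext newSslContext(String pinnedSha256Fingerprint) {
        try {
            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(null,
                    new TrustManager[] { new PinnedTrustManager(pinnedSha256Fingerprint) },
                    null);
            return sslContext;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```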

Translating the strategy we see with OpenID to other authentication protocols such as WS-Federation is an interesting exercise. Basing your trust on SSL in the case of the WS-Federation web passive authentication protocol requires something similar to the OpenID association step. If you want to follow the WS-* specifications, you quickly end up with a WS-Trust STS validation service that allows you to validate SAML tokens that have been acquired via a WS-Federation web passive protocol run. The latest version of the eID IdP product contains such an STS validation service. The relying party can submit the SAML token to the STS as follows:
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  ...
  <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">
    ... the SAML2 assertion received via the WS-Federation web passive protocol ...
  </saml2:Assertion>
  ...
</soap12:Envelope>

Via the <wsp:AppliesTo> element the relying party can further restrict the application context in which the SAML assertion should be considered valid.
The eID IdP STS validation service can now answer to the relying party as follows:
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  ...
  <wsu:Timestamp xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
    ...
  </wsu:Timestamp>
  ...
  <trust:RequestSecurityTokenResponseCollection xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
    ...
  </trust:RequestSecurityTokenResponseCollection>
  ...
</soap12:Envelope>


The STS client can again simply default to the standard SSL engine for the implementation of its trust model. Adding such an STS validation service to the eID IdP also opens the door to a seamless transition from a web passive scenario (i.e. web browser) to an active scenario (i.e. web services). This feature is key to modern IAM architectures.

So let's summarize the different possible (non-exclusive) trust models that were mentioned:

- fingerprint validation of the peer's certificate, with a rollover mechanism to cope with certificate renewal;
- full blown PKI validation, scoped down to a trust domain with a limited set of trust points and certificate policy identifiers;
- SSL-based trust, relying on the platform's default PKI validation of the WebTrusted certificate authorities.

An interesting variant is one where you combine fingerprint validation with PKI validation. That way you have a very strict set of trusted end-points, and you still benefit from automated revocation propagation. Of course you also still have to manage the fingerprint rollover.
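This combined model can be sketched as follows. The PKI validation step is injected here as a placeholder predicate; in a real setup it would wrap, for example, a PKIX certificate path validation with revocation checking enabled. All names are illustrative.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Set;
import java.util.function.Predicate;

// Sketch of the combined variant: a certificate is trusted only when it is in
// the strict, fingerprint-pinned set of end-points AND it survives a full PKI
// validation (which brings the automated revocation propagation).
public final class CombinedTrust {

    private final Set<String> acceptedFingerprints;
    private final Predicate<byte[]> pkiValidation;

    public CombinedTrust(Set<String> acceptedFingerprints,
                         Predicate<byte[]> pkiValidation) {
        this.acceptedFingerprints = Set.copyOf(acceptedFingerprints);
        this.pkiValidation = pkiValidation;
    }

    public boolean isTrusted(byte[] encodedCertificate) {
        try {
            String fingerprint = HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(encodedCertificate));
            // both checks must pass: strict end-point set first, then PKI validation
            return acceptedFingerprints.contains(fingerprint)
                    && pkiValidation.test(encodedCertificate);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```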
