Oops... Missed One: From XML Gateways to Service Gateways

I recently noticed this great article on XML appliances published in March 2010. The thing is, I didn't see any mention of Intel(R) SOA Expressway, which is the modern incarnation of the hardware XML gateway brought to market by Sarvega as early as 2000. At Intel, we call this product a service gateway, which can be thought of as a higher-performing, more flexible gateway that better matches current performance, extensibility, and data-center trends than its earlier hardware-only cousins.

I think the author did a great job of mentioning some of the salient points regarding XML gateways, such as the need to push policy enforcement to the edge to simplify coded-in security (decoupling) and the need to provide XML acceleration for complex XML tasks such as transformation, validation, and XML security. He also alluded to some current limitations found in the IBM DataPower appliances:

But as powerful as IBM's XML appliance is, there is always room for improvement. One area where Iocola said the devices have trouble is handling large messages. To remain efficient, he said, the appliances need to offload messages approaching 2 GB to other components.
 
I also like the author's distinction between an ESB and an XML gateway, as this is often a point of confusion. Specifically, the author mentions that a gateway "doesn't host services", and this is true for traditional gateways such as IBM DataPower, but it doesn't have to be the case with a service gateway like Intel(R) SOA Expressway. I would also add that a big distinction between an ESB and a gateway is that the ESB doesn't provide edge security protection or high-performance XML processing. For instance, ESBs typically don't have denial-of-service protection, content scanning, or message throttling. These tasks are more closely aligned with an edge security product such as an XML gateway.
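
To make one of those edge tasks concrete, here is a minimal sketch of message throttling in Python: a simple token bucket that caps how fast messages reach a back-end service. This is an illustration of the concept only (the rate, burst size, and class name are arbitrary assumptions), not a description of how any particular gateway implements it.

```python
import time

class TokenBucket:
    """Illustrative message throttle: allow at most `rate` messages per second,
    with short bursts of up to `burst` messages."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate                 # tokens replenished per second
        self.capacity = burst            # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                     # caller should reject or queue the message

# Example: throttle a noisy client to 100 messages/second with a burst of 20.
throttle = TokenBucket(rate=100.0, burst=20)
if not throttle.allow():
    print("429: message rate exceeded")
```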

So, what is the difference between a traditional XML gateway and a service gateway? Let's summarize a few points as follows:

  • It must support a high-performance virtual form factor. This means that the performance of the XML gateway cannot depend on customized hardware, such as special XML boards or chips.
  • It must be extensible for new business logic and security processing. Enterprises cannot wait for a hardware refresh for the vendor to add custom processing.
  • It must support all styles of services, whether based on REST, SOAP, or even proprietary services built on custom protocols.
  • It must scale XML processing on cheap commodity hardware. In this case, Intel(R) Multi-Core servers come to mind, but AMD is also an option.
  • It must not require any specialized coding knowledge, such as deep XSL knowledge, extension functions, or an army of developers. After all, if it did, you would just invest in writing more code.
  • It must support non-XML data. While we can all hope that every company will move its data sets to 100% XML, it's just not a reality we can count on.
  • It must support a wide ecosystem of middleware and security vendors for interoperability and integration, allowing for a best-of-breed application.
  • It must offer a physically secure form factor running on a well-known operating system with auditable patch levels, rather than a custom appliance OS subject to security by obscurity.

Another way to think about a service gateway is that the product category is trying to bring more value to the security architect and developer in the data-center. If you are currently using an XML gateway and notice that many of these requirements aren't being met, maybe it's time to look at a service gateway such as Intel(R) SOA Expressway?


Separation of Concerns: Why Service Gateways are even better than they appear

I've spent the last two weeks traveling around to two interesting conferences. One was Microsoft TechEd, where I gave an interactive session on Intel(R) SOA Expressway and the other was JBoss World, where I got a chance to expose the product to a number of JBoss developers and system administrators.

At each of these conferences, I expected to see more of a homogeneous crowd. That is, one would expect a mostly .NET crowd at the Microsoft conference and mostly an open-source or Java crowd at the JBoss conference, and while this is generally the case, developers and architects seem to have grown a much higher tolerance for alternative languages and technology stacks. Issues of “religion” toward a single vendor or technology appear to be fading somewhat. I think this is partly due to the amount of inorganic growth that companies are experiencing, mostly from buying up other companies and having to integrate their middleware stacks.

In these inorganic growth scenarios we have a security architect or developer faced with multiple applications written with products from many different vendors. Most of the time this is a security nightmare scenario: you may have a development team well versed in coding in security for one language, say Java, and now have to replicate that effort on a completely different middleware stack, say C# or PHP. Worse, as inorganic growth continues, it's like rolling the "middleware dice" to find out which new technology stack will appear on the scene. In this scenario, you can only scale the security of your application as fast as you can train your security architect to be adept in the ins and outs of each vendor product's particularities – and this is not a risk-averse strategy for any company.

If we step back and look at the problem, an XML gateway such as Intel(R) SOA Expressway offers an elegant solution. It is the only conceptual model whose success turns on simplifying the security infrastructure by removing coded-in security. What? Yes, that is correct. To use the proxy model for security successfully you have to do one thing: turn off the security processing in your middleware stack and force your developers to be application developers rather than security architects.

Does this sound backwards? Coded-in security is hard to manage, maintain, monitor, audit and change. You are tightly coupled to the subject-matter expert who wrote the code, and that person may have left the team after the "inorganic growth" event that caused you to have to deal with this new application. Do you want to find out which version of WS-Security you are really using? Check the code. Want to find out if you are processing X.509 certificates with CRL processing turned on? Check the code. Are we accepting signed requests? Check the code. Are we protecting against SQL injection attacks or performing type validation on the inputs to the application? Check the code…
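
To make the pain concrete, here is a deliberately simplified Python sketch of what coded-in security tends to look like inside a service handler. Every name in it (the handler, the `wsse-version` header key, the collaborators) is a hypothetical stand-in; the point is only that the WS-Security version, the revocation decision, and the input validation all live in application code, where the only way to answer a question about them is, indeed, to check the code.

```python
import re

SQL_INJECTION = re.compile(r"('|--|;|\bunion\b|\bdrop\b)", re.IGNORECASE)

def transfer_funds(request: dict, user_store, revoked_serials: set):
    """Hypothetical service handler with security decisions tangled into business logic."""
    # Which WS-Security version do we accept? The answer is buried here.
    if request["headers"].get("wsse-version") != "1.1":
        raise PermissionError("unsupported WS-Security version")
    # Is certificate revocation checking actually on? Also buried here,
    # and easy to quietly comment out.
    cert_serial = request.get("client_cert_serial")
    if cert_serial is None or cert_serial in revoked_serials:
        raise PermissionError("certificate missing or revoked")
    # Input validation mixed in with the business payload.
    if SQL_INJECTION.search(request["body"].get("memo", "")):
        raise ValueError("suspicious input rejected")
    # ...and only now, the actual business logic.
    return user_store.transfer(request["body"]["from"],
                               request["body"]["to"],
                               request["body"]["amount"])
```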

As you can see, this strategy is painful, increases the complexity of the overall system, and makes security a hard problem. The truth is, the proxy model for web services, SOA and XML processing has been around for about 10 years, and its value has increased as companies have multiplied the complexity of their applications. The basic idea is to centralize generic policy enforcement in a single place, for both threat protection and trust enablement. The model works beautifully: developers write pure services, devoid of any security logic, and a single policy is pushed to the gateway, where it can be easily maintained, monitored, audited and changed using configuration, not coding. In practice this means there is a trust relationship between each of the services and the gateway, and the one-time effort on the back-end applications or services is to go through the code once and remove the security checks. Your developers are free again to focus on business logic, and the security architect can focus their attention on the gateway itself.

Here is a picture of the conceptual architecture:


Here we have the gateway acting as a policy enforcement point. The key idea is that all security processing can be centralized. Threats are stopped at the edge, and trust is maintained through a combination of message-level security (encryption and digital signatures), session security (such as SSL), and authentication, authorization and auditing, which is done by calling out to existing identity management investments such as CA SiteMinder, Oracle Access Manager, IBM Tivoli Access Manager, LDAP, Active Directory, ADFS v2 and others. Once a trust relationship is established between the service endpoints and the gateway, the services themselves can be as pure as possible: devoid of security processing other than the identity context, which can be provided by the gateway. Developers can finally be free of having to worry about security.
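
As a rough sketch of the request path implied by that picture, consider the outline below: threat checks at the edge, then authentication and authorization via a call-out, then forwarding with an injected identity context. The function names and the `authenticate`/`authorize`/`forward` call-outs are hypothetical placeholders for whatever identity store and back-end service you actually have; this is the conceptual flow, not a product API.

```python
MAX_MESSAGE_BYTES = 2 * 1024 * 1024   # illustrative edge limit, not a product setting

def enforce_policy(message: bytes, credentials: dict,
                   authenticate, authorize, forward):
    """Conceptual policy enforcement point: threat checks, then trust, then forward.

    `authenticate`, `authorize` and `forward` stand in for call-outs to an
    existing identity management system and the downstream 'pure' service."""
    # 1. Threat protection at the edge (size limits, content screening, ...).
    if len(message) > MAX_MESSAGE_BYTES:
        raise ValueError("message too large; rejected at the edge")
    # 2. Trust enablement: authenticate and authorize via existing IdM investments.
    identity = authenticate(credentials)            # e.g. LDAP / Active Directory call-out
    if not authorize(identity, "/service"):
        raise PermissionError("access denied by central policy")
    # 3. Forward to the back-end service with identity context attached;
    #    the service itself stays free of security logic.
    return forward(message, identity)
```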

For those of us who have been in this space for a while, this picture may elicit an obvious "so what" response, but I think it is a very powerful model for security. In fact, enterprises can approach this model without an actual gateway if they can manage to centralize security processing for a class of services. The real beauty of a model like this is that it specifically requires the simplification of coded-in security. How many other security models can make this claim?


Active Directory Federation Services v2.0 - A Good Start

It looks like Microsoft has released their long-awaited Active Directory Federation Services v2.0 ('ADFS v2.0') component for Active Directory.

Simultaneously with this release, Microsoft is pushing the concept of "claims-based identity" as the new thought "superstructure" that, according to Microsoft, is a seminal event in the history of thought for identity management.

Here is my favorite quote from Microsoft's book "A Guide to Claims-based identity management".

"The claims-based model embraces and subsumes the capabilities
of all the systems that have existed to date, but it also allows
many new things to be accomplished."


I must say this is quite a claim.

Let's step back and take a look at what ADFSv2 is actually doing on the wire, which is where the truth ultimately lies.

The model proposed by Microsoft is equivalent to the assertion model of identity. In all cases, whether it is a web service, web site, or SaaS application, the user authenticates to ADFSv2 for a specific application and then receives a SAML assertion for that application, which is then presented to the target application in a browser or "smart client" (web service).

Microsoft is trying to elevate its technology to greater philosophical importance by using the word "claim" in place of "attribute" or "role" or "property" of a user. This makes for some good marketing, but the SAML assertion that comes from ADFSv2 will have very specific attributes in it, targeted at a very specific application. This is an important point because it means you can't simply take the SAML assertion and "federate it" anywhere, as it will carry "claims" (i.e., attributes) designed for a specific application, some of which might be quite sensitive.

For instance, a claim for an expense reporting application might have an employee's cost center as an attribute in the SAML assertion. This is great for federating the assertion to the expense reporting application, but not so great when federating the assertion to other applications, such as a partner website.

I looked around for a sample SAML assertion generated by ADFSv2 with some sample claims to dig in to the details, but they are hard to come by. I wonder why...

Notwithstanding my criticism, the model is a good one because it moves the authentication and authorization logic further away from applications and into a centralized, trusted assertion. This is a step in the right direction and follows the trend of security decoupling that has proven useful in the past for other technologies such as SSL and web services security.

To conclude this post, I am leaving two unanswered, thought-provoking questions:

#1: What is the difference between a claim and an attribute? (Credit for the seed of this idea goes to Dr. Babak Sadighi of Axiomatics during a conversation we had at the recent Kuppinger show)

#2: Is there a need for "claims filtering/protection" - e.g. on-the-wire gateway functionality that can obfuscate, encrypt or delete sensitive claims in ADFSv2 issued SAML assertions when these assertions leave the Enterprise perimeter?
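
On question #2, here is a minimal Python sketch of what such claims filtering could look like on the wire: parse the issued assertion and strip attributes (claims) that should not leave the perimeter. The namespace is the standard SAML 2.0 assertion namespace, but the sensitive claim URI is a made-up example, and signature handling is deliberately ignored (in practice the gateway would have to re-issue or re-sign the filtered assertion).

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
SENSITIVE_CLAIMS = {"http://example.org/claims/costcenter"}   # hypothetical claim URI

def filter_claims(assertion_xml: str) -> str:
    """Remove sensitive Attribute elements from a SAML 2.0 assertion before it
    crosses the enterprise perimeter."""
    root = ET.fromstring(assertion_xml)
    for stmt in root.iter(f"{{{SAML_NS}}}AttributeStatement"):
        # Copy the list so we can remove elements while iterating.
        for attr in list(stmt.findall(f"{{{SAML_NS}}}Attribute")):
            if attr.get("Name") in SENSITIVE_CLAIMS:
                stmt.remove(attr)
    return ET.tostring(root, encoding="unicode")
```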


Truth Denied?

A colleague forwarded me this link from Lustratus Research. Incredibly, the analyst makes the following claim:

"I say “appliances” in inverted commas because Intel’s product is wonderfully described as a software “appliance”. Surely the award for the most spin in a product category goes to Intel."


I was a bit taken aback by this. I hope the analyst was being provocative on purpose. I wrote the following reply (which was subsequently deleted) as a clarification:

"Hello There – It seems that this is a very provocative report, especially with respect to the statements made regarding the Intel product.
First off, I have to say that I am from Intel, so as you must, please take my comment with a grain of salt.

I hope, however, that the analyst does not confuse and equivocate a nuanced product available in multiple form factors for different usage models with “marketing spin.” The facts speak differently in this case.

In fact, the Intel(R) SOA Expressway product (like some of its competitors) is available in three form factors (hardware, software and virtual image) – each of which can be properly called an appliance.

“Appliance” here does not necessarily reflect a strict category of hardware only, but instead a set of management and monitoring capabilities such as a real-time dashboard, self-healing capabilities, alarms, alerts, management clustering, and high availability with a familiar web-based interface and easy management.

It is these capabilities that primarily characterize a software appliance. In this case we can think of a hardware appliance and then subtract out the physical security features. It is only natural that we can take this same form factor and package it for a virtual machine, and we will arrive at a similar form factor designed for a virtual private cloud. Incidentally, this is something especially difficult for a product only available as a pure hardware appliance.

Finally, because the Intel product relies primarily on a software layer that performs machine language processing of XML, the addition of hardware adds only physical security prowess, and is not a necessary form factor for a high performance deployment. All in all, the product is truly available in all three form factors – no spin required. Perhaps some of these facts can “spin” the customer closer to the truth about this particular product.

Blake Dournaee"


Is truth denied? I just wanted to make sure that our customers know that our appliance form factors (software, hardware and VM) are not elements of spin!

Blake


Really Understanding the SSL/TLS Vulnerability (Part 1)

This is a two-part blog post. In the first part I will try to explain the vulnerability so we can get a better handle on it, and in the second part we'll examine possible countermeasures and mitigation strategies.

There was some big news in the security world on November 4, 2009, when Marsh Ray released details about a newly published SSL/TLS vulnerability. Of course, selling security is all about creating Fear, Uncertainty and Doubt (FUD), and as such a number of websites and blogs also picked up the story.

The real question is whether you should be worried or not. I think that in order to answer this question we need to really dig into the details of how the attack works and then analyze the risk and potential mitigation factors from there. This particular attack is called a chosen plaintext attack. A successful attack will allow the attacker to do two things:

(1) Execute a chosen HTTP transaction on the server. This could be any HTTP request that eventually does something important on the server side. For instance, a bank account transfer comes to mind, as do single-message transactions such as data insertion, but it could be any server-side action triggered by a specifically chosen HTTP request.

(2) Gain information regarding the shared symmetric key used in an SSL session (for a limited time). If the attacker knows that a specific HTTP transaction produces a given plaintext, or plaintext sequence, he or she might be able to choose which plaintext blocks to encrypt and obtain the matching results. This may yield some information about the key. In this case, the attacker might accumulate many examples of the plaintext and matching cipher-text. As we will see, a sophisticated attacker acting as a man-in-the-middle (MITM) will be able to inject plaintext and then have access to the corresponding ciphertext, and if he or she knows any information about the generated response, there may be a stretch of time (before the next renegotiation) where a chosen-plaintext attack is feasible.

In order to understand how this man in the middle attack (MITM) works, we have to first understand what happens inside an SSL handshake. SSL works in two broad phases: First, the handshake phase, and second, the application data phase. The handshake phase is where the shared, symmetric key is computed which is subsequently used in the application data phase of the protocol to encrypt application data traffic. The goal of SSL is to provide secure socket communication between two endpoints. In this respect, SSL is really a layer 4 protocol as it is communicating over standard sockets. I think we take SSL for granted due to its ubiquity, but the handshake is actually performing a cryptographic feat:  a shared symmetric key is securely generated between two parties that have never met before.

The protocol itself requires no initial shared secret information, only a set of trusted certificate-authority (CA) certificates on the client side (at a minimum). We also take these for granted because the certificate-authority vendors have placed "trusted" CA certificates in our browsers. As an exercise you can check your browser to see who you implicitly trust. In Firefox 3.5.x this is under Tools > Options > Advanced > Encryption tab > View Certificates. In the latest Firefox I count roughly 70 CA certificates. Moreover, the protocol supports client authentication as well, which requires that CA certificates be provisioned on the server, similar to what we see in the browser.

Certificates are important in two respects for SSL. First and foremost, they contain the public key used by the client to encrypt the pre-master secret during the handshake, and second, they are used to validate one or both sides of the communication using X.509 certificate path validation. Certificate path validation is the process of validating the trust on a certificate by checking that you trust the issuer of the certificate. This is done by looking at the issuer's issuer and so on, up the chain until you reach an implicitly trusted CA certificate, called a root CA.
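
To see path validation in action, the sketch below uses Python's standard ssl module, which performs exactly this chain check against the configured trusted root CAs when a default context is used. The host name is an arbitrary example; the point is that the connection only succeeds if the presented chain terminates at a trusted root (and the hostname matches).

```python
import socket
import ssl

def inspect_server_cert(host: str, port: int = 443):
    """Connect with certificate path validation enabled (the default) and
    return the validated peer certificate's subject and issuer."""
    context = ssl.create_default_context()   # loads trusted root CAs, verifies the chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()          # available because validation succeeded
            return cert["subject"], cert["issuer"]

# Example (hypothetical host): raises ssl.SSLCertVerificationError if the chain
# does not terminate at a trusted root CA or the hostname does not match.
# print(inspect_server_cert("www.example.com"))
```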

The limitation that Marsh Ray uncovered has to do with renegotiation of the shared secret, which I will call the renegotiation gap. SSL allows either side to renegotiate the master secret at any time by sending an appropriate message, either from the client or the server. Renegotiation can be done for any number of reasons – to refresh the shared secret, to change the cipher, or to change the mode of the protocol to a different form. It can also be done for no specific reason at all, simply by having the client send a new client hello. Usually, however, renegotiation is a move from one-way SSL (server authentication) to two-way SSL (client authentication). The first thing we will do in order to understand the renegotiation gap is dig into the handshake.

The SSL Handshake

To see the problem, you have to put yourself in the mind of the attacker and treat the handshake like an information game with state changes along the way. The goal of the game depends on the mode of the protocol, and for SSL this will either be a shared secret with server-side authentication, or a shared secret with both client and server authentication. In the case of server side authentication, the end goal is to generate a shared secret key where the client knows the identity of the server based on the limitations of PKI technology (certificate validation). In the case of client authentication, the goal of the game is to generate a shared secret where both sides know the identity of each other, again based on the limitations of PKI technology. Once you understand the additional information state at each stage of the handshake, it becomes trivial to see how a man-in-the-middle attack can work, and further, how the renegotiation gap changes the rules of the game.

First, let's lay out the handshake in 13 possible steps as follows. The table below is meant to be read from left to right and the ">" arrow denotes a message flowing from the client to the server. Similarly, the "<" indicator denotes a message from the server to the client. Once the handshake is finished, the game is over with one of two "normal" outcomes: (1) A shared secret was generated and the server was authenticated or (2) The shared secret was generated and both sides were authenticated. A critical point here is that a man in the middle of the SSL handshake changes the rules of the game and the information state.

In the table below, note that "*" denotes optional messages in the SSL handshake, used in the case of client authentication. This implies that the shortest possible handshake (for server-side authentication) is an exchange of 9 messages and the full handshake is 13 messages. It should be noted for completeness that the Server Key Exchange is only used when the certificate doesn't contain a key usable for the key exchange, such as in the case of DSA. Also, for completeness, there is a mode of SSL called "anonymous Diffie-Hellman" in which neither side authenticates, but we won't cover it since it is infrequently used.

Handshake Phase

| Step | Client Message | Server Message | Information State |
|------|----------------|----------------|-------------------|
| 1 | Client Hello > | | The first handshake message; contains the protocol version, random bytes, a session ID (or null), supported cipher-suites, and optional supported compression methods |
| 2 | | < Server Hello | The second message contains the selected cipher-suite, the highest mode of SSL supported by the server, the session ID and compression method |
| 3 | | < Server Certificate | The ASN.1-encoded certificate or certificate chain goes back to the client |
| 4 | | < Server Key Exchange* | (Skipped for this discussion but included for completeness) |
| 5 | | < Certificate Request* | Contains acceptable certificate types and a list of acceptable CAs |
| 6 | | < Server Hello Done | Indicates the server is finished with its side of the handshake |
| 7 | Client Certificate* > | | The client sends its certificate or certificate chain |
| 8 | Client Key Exchange > | | The pre-master secret is encrypted for the server using the public key in the server's certificate |
| 9 | Certificate Verify* > | | Client-generated signature over the previous handshake messages |
| 10 | Change Cipher Spec > | | Signal message with a version and a single byte |
| 11 | Finished > | | Two hashes of the handshake messages, master secret, identifier and padding |
| 12 | | < Change Cipher Spec | Signal message with a version and a single byte |
| 13 | | < Finished | Two hashes of the handshake messages, master secret, identifier and padding |

  • Step 1: There is no significant shared information state yet. The client has asked to start a new SSL session with some basic parameters. One important point here is that if the client wishes to resume a session, it must include the session ID.
  • Step 2: There is still no significant shared state – the server responds with a chosen cipher-suite, but the client still has not authenticated the server.
  • Step 3: The server sends its certificate or certificate chain to the client. It might be tempting to think that at this point the client can authenticate the server, but this is simply not the case. The reason is that certificates are public by design, so thinking like an attacker here means that the server could (at this point) present any X.509 certificate. The only thing that guarantees authentication in the PKI model is possession of the private key, which the server doesn't actually prove until it sends its finished message in step 13. In other words, the client can perform certificate path validation on this certificate and get a valid result, but it has no guarantee that the server owns the private key corresponding to this certificate. One possible (non-cryptographic) way of adding some measure of authentication here is to verify the hostname against any hostname declared in the certificate's common name, but this is a de-facto practice and not part of the actual protocol specification. It also doesn't work if the man in the middle is actively installed below layer 4 in the network infrastructure.
  • Step 4: We will skip this step for our purposes here, as it pertains to certificates that do not contain a usable public key.
  • Step 5: The server optionally requests the client's certificate. This message contains the acceptable certificate types and Certificate Authority Certificates (CAs), which are trusted, root certificates.
  • Step 6: The server indicates that it is finished with its side of the handshake. Even if the client verifies the server's certificate, it still does not have any guarantee that the server is who it says it is.
  • Step 7: Even if we assume the client has sent its certificate, from an information state point of view, the server has no guarantee yet that the client is authenticated, even by validating the certificate. The reason is the same as in step 3 or 6 for the server; the client has not yet demonstrated proof of possession of the private key that matches the certificate.
  • Step 8: The client generates a 48-byte pre-master secret. This value is encrypted using the server's public key and sent to the server. The pre-master secret is a two-byte client protocol version and 46 bytes of random data; the client version helps prevent version roll-back attacks (a small sketch of this structure appears after this list). It should be emphasized that until the server proves it can decrypt the secret, there is no evidence that the server is authenticated. So even after step 8, the server has proven nothing to the client from a cryptographic standpoint.
  • Step 9: The client finally generates proof of possession of the private key. It does this by digitally signing a hash of previous handshake messages. This further implies that in terms of the information game, in the case of two-way SSL, the server is the first to know the true identity of the client.
  • Steps 10 and 11: The change cipher spec message is sent as a signal that we are about to begin the application data phase. Immediately following this the client sends its finished message, which contains two hashes (MD5 and SHA-1) of the master secret, handshake messages, string identifier and padding. From an information state perspective, the server is still not authenticated.
  • Steps 12 and 13: The server is finally forced to prove that it can decrypt the encrypted pre-master secret sent in step 8. Here the server must be able to compute the correct finished hashes, which requires the master secret, and the client must verify that this is the case.
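
As a small illustration of step 8, the sketch below assembles a pre-master secret with the structure described above: a two-byte client protocol version followed by 46 random bytes (the 0x03 0x01 default shown here corresponds to TLS 1.0 and is just an example). Encrypting it under the server's RSA public key is omitted, since that would require a third-party crypto library.

```python
import os

def make_pre_master_secret(client_version: bytes = b"\x03\x01") -> bytes:
    """48-byte pre-master secret: 2-byte client protocol version + 46 random bytes.
    Keeping the client's offered version inside the secret is what helps defeat
    version roll-back attacks, as noted in step 8."""
    if len(client_version) != 2:
        raise ValueError("client version must be exactly two bytes")
    secret = client_version + os.urandom(46)
    assert len(secret) == 48
    return secret
```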

With this background, we can now insert the attacker into the flow to see how the attack works. It is easy to think of SSL as an "encrypted" channel safe from an intruder, but if there is a man in the middle actively intercepting packets from the inception of the protocol, he or she can exploit the renegotiation gap.

The Man-in-the-Middle is actively waiting between the client and server under the following preconditions:

  • The MITM is waiting from the inception of the handshake phase and assumes that the SSL protocol begins with server-side only authentication
  • The MITM waits for a client hello that he or she knows will trigger server-side authentication
  • The MITM goes through the full handshake with the server and because the server is not authenticating the client, the server does not know it is talking to a man-in-the-middle
  • Eventually, some trigger occurs that causes the server to ask for a client certificate. I will call this the renegotiation trigger message (RTM). This trigger can be a renegotiation of the SSL handshake, triggered by either the client or the server, for any reason. I received a blog comment regarding this point (thanks Steve), and one point made there is that the renegotiation may be triggered simply by sending a new client hello at any time. This is a critical point: the SSL specification allows a client hello to be sent at any time, and doing so will trigger renegotiation. This means that the RTM can just be a client hello, with no change to the cipher-suite or strength of the SSL/TLS session.
  • The MITM is not passing through each message as it flows, but collecting and saving up the messages he or she wants.

 
Let's look at how the man in the middle can intercept messages during the handshake. In the following table I've highlighted the portions of the handshake that are important to making the attack work.

Handshake Phase with Man-In-The-Middle (MITM)

| Step | Client Message | Man in the Middle | Server Message | Information State |
|------|----------------|-------------------|----------------|-------------------|
| 1 | Client Hello > | Client Hello > | | The server receives a client hello as normal; the MITM caches the initial client hello and starts a new session with the server, sending a new client hello |
| 2 | | MITM caches the server hello without passing it on | < Server Hello | The server responds with a server hello; the client is still waiting for this message |
| 3 | | MITM caches the server certificate | < Server Certificate | The client is still waiting for a server certificate |
| 4 | | | < Server Key Exchange* | (Skipped) – does not affect this case |
| 5 | | | < Certificate Request* | (Skipped) – the MITM must start with a handshake that does not trigger a certificate request |
| 6 | | | < Server Hello Done | The server sends the server hello done |
| 7 | | Client Certificate* > | | (Skipped) – the MITM has chosen a handshake with no client certificate request |
| 8 | | Client Key Exchange > | | The pre-master secret is encrypted for the server using the public key in the server's certificate |
| 9 | | Certificate Verify* > | | (Skipped) – the MITM has chosen a handshake with no client certificate request |
| 10 | | Change Cipher Spec > | | Signal message with a version and a single byte |
| 11 | | Finished > | | Two hashes of the handshake messages, master secret, identifier and padding (MITM) |
| 12 | | | < Change Cipher Spec | Signal message with a version and a single byte |
| 13 | | | < Finished | Two hashes of the handshake messages, master secret, identifier and padding (server) |
| 14 | | Renegotiation Trigger Message (RTM) > (this can be as simple as a new client hello) | | |
| 15 | | | < Hello Request | The server sends a hello request based on the trigger message (RTM), which starts a new handshake |
| 17 | | Replayed Client Hello from Step 1 > | | |
| 18 | | < Server Hello | < Server Hello | The server responds with a server hello, which the MITM passes on; this matches the response the client expects after Step 1 |
| 19 | | < Server Certificate | < Server Certificate | The server responds with its certificate, which is passed on by the MITM |
| 20 | | < Certificate Request | < Certificate Request | The server requests the client certificate – the RTM in this case triggers a stronger form of SSL |
| 21 | | < Server Hello Done | < Server Hello Done | The server sends the server hello done, which is passed through to the client |
| 23 | Client Certificate > | Client Certificate > | | The client sends its certificate |
| 24 | Client Key Exchange > | Client Key Exchange > | | The MITM passes along the client key exchange containing the encrypted pre-master secret |
| 25 | Certificate Verify > | Certificate Verify > | | The MITM passes along the client's proof-of-possession message |
| 26 | Change Cipher Spec > | Change Cipher Spec > | | Signal message with a version and a single byte |
| 27 | Finished > | Finished > | | The MITM passes along the finished message from the client |
| 28 | | < Change Cipher Spec | < Change Cipher Spec | Signal message with a version and a single byte |
| 29 | | < Finished | < Finished | The MITM passes along the finished message from the server |
| 30 | | < HTTP Response | < HTTP Response | The server retroactively applies the authentication to the message sent in step 14, allowing the transaction through |

The MITM – Handshake Explanation

The Man-in-the-Middle (MITM) is present from the inception of the first client hello and acts in such a way as to make both the client and server believe the handshake is legitimate, while at the same time executing a chosen plaintext attack. I have highlighted the important parts of the attack in the previous table.

  • Step 1: The client sends its initial client hello and all it eventually sees is a server hello in step 18. The MITM works the attack between Step 1 and Step 18, acting as if he is the client.
  • Step 14: This is where the MITM must choose an appropriate renegotiation trigger message (RTM). This message must be a legitimate HTTP request that triggers the server side to start a new SSL session. Two ways of doing this are to choose an HTTP request that triggers either a stronger cipher-suite or a client certificate. In the example here, we assume the RTM triggers a renegotiation for a client certificate. In practice this would be done using an HTTP request to a "protected" or "higher security" URL location on the server. It should be noted that the attack can also be made to work if the client triggers a renegotiation, though this isn't shown here.
  • Step 18: Once the server hello is passed back to the client, the attack is nearly complete. From here on out the server will rely on the authenticity of the true client, with the MITM as a silent go-between. The server will retroactively allow the message in Step 14 based on the future authentication state from the true client. It is this specific behavior that results in the compromise.
  • Steps 23-25: The client sends its valid certificate along with the encrypted pre-master secret and proof of possession of the private key. It should be noted that the MITM cannot snoop on the communication at this point, but will have access to the encrypted response and knows at least part of the matching plaintext, which was part of the message in step 14. This is the second weakness: it may yield information that allows the MITM to derive the key, but this is still quite difficult.
  • Step 30: This is the confirmation of the content of the renegotiation trigger message (RTM) in step 14. The MITM has successfully executed a one-way HTTP transaction using the authenticity of the client. I say this is one-way because they cannot read the actual response.

Another way of seeing how the attack can work is to consider the behavior from the client's and the server's perspectives:

  • Client's Perspective: The client starts an SSL session by making an HTTP request to a web server and is prompted for a client certificate. The client presents their certificate. They notice that the first HTTP response (web server communication) they get back doesn't match their initial request. They shrug it off and send the request again, this time getting a valid result. They may also be surprised to find that they are asked for a client certificate when in the past they weren't, but it doesn't matter as the client would just assume security policies have increased on the server side. The client may notice the handshake took a little longer.
  • Server's Perspective: The server receives a request for a new SSL session from a particular client IP address. The server processes the handshake normally, authenticating itself. The client subsequently asks for a URL that is protected with a higher level of security. The server caches the HTTP request for this protected URL and renegotiates the SSL session with the client, this time asking for the client to authenticate itself. The client authenticates itself and the server sends the resulting URL location (or response) back to the client.

Conclusion

As you can see, there is very little to indicate a compromise on either the client or server side. The key weakness is the SSL renegotiation, which can be triggered by the client or server. Worse, renegotiation is valid at any point during the protocol communication and may be done simply for the purposes of refreshing the key. In principle the weakness can be exploited anywhere the renegotiation occurs assuming the MITM is watching for this state from the inception of the handshake. In the next blog post we will look at the current and proposed mitigation strategies for this attack.

 


Intel and Oracle at Oracle Open World 2009

Hello All -

I just wanted to send out a little note that I'll be at Oracle Open World next week at Moscone Center in San Francisco on October 12th, 13th and 14th.

We've got a demonstration setup that uses Oracle(R) SOA Suite 11G and Intel(R) SOA Expressway. The demo shows how you can deploy SOA Expressway as an edge security gateway to offload security processing and provide threat protection for application level attacks. We'll have SOA Expressway and Oracle SOA Suite running side-by-side on some monster laptops running on 64-bit Linux. We also plan to have a demo of Oracle Entitlements Server (OES) to demonstrate how authorization decisions can be pushed to the network edge.

You can visit the Intel software website for more information on Intel(R) SOA Expressway and Oracle's website for more information on Oracle(R) SOA Suite.


Stating the obvious on XML Attacks

It looks like everything old is new again with XML Attacks...

I came across this article in the Washington Post. They use the term "XML fuzzing" to describe really just 50% of the XML threat equation: something I have always called coercive parsing, which is the manipulation of the XML document structure.

This, however, is only half of the battle. XML threats can also be semantic, meaning the attack modifies the content of the XML document to force a downstream system to execute a particular function. This is the other element of XML threats that is left out of the discussion. Semantic threats cover areas where the XML document is executed in some way, such as SQL injection, embedded JavaScript, or other embedded languages like XPath.

All in all, it is a cat-and-mouse game where the most important feature is extensibility: the ability to deploy protection against new, as-yet-unnamed threats in real time using a generalized mechanism such as regular expressions. All of these features (protection from structural threats, protection from semantic threats, and threat extensibility) can be found in Intel's SOA Expressway.
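
As a toy illustration of both halves of that equation, the Python sketch below screens an XML payload for a structural problem (excessive nesting depth, one flavor of coercive parsing) and for semantic content patterns (an illustrative SQL-injection style expression and an embedded-script check). The depth limit and the patterns are arbitrary examples and far from complete; the point is the structural/semantic split and the regex-based extensibility.

```python
import io
import re
import xml.etree.ElementTree as ET

MAX_DEPTH = 32                                    # arbitrary structural limit
SEMANTIC_PATTERNS = [                             # illustrative, regex-extensible rules
    re.compile(r"(--|;|')\s*(or|and)\s+1\s*=\s*1", re.IGNORECASE),  # SQL-injection style
    re.compile(r"<\s*script\b", re.IGNORECASE),                      # embedded script
]

def screen_xml(payload: str) -> None:
    """Reject payloads that look structurally (coercive parsing) or semantically hostile."""
    # Structural check: track element nesting depth with streaming parse events.
    depth = 0
    for event, _ in ET.iterparse(io.StringIO(payload), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("structural threat: nesting depth exceeded")
        else:
            depth -= 1
    # Semantic check: scan the content against the extensible pattern list.
    for pattern in SEMANTIC_PATTERNS:
        if pattern.search(payload):
            raise ValueError("semantic threat: pattern %r matched" % pattern.pattern)

# Example: screen_xml("<order><memo>' OR 1=1 --</memo></order>") raises ValueError.
```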



About Me

I have been working in the XML/SOA and security space for about 10 years. I currently work at Intel Corporation in their software group. I wrote the first book on XML Security and am a co-author of SOA Demystified from Intel Press. My interests are an eclectic mix of computing, security, business, technology and philosophy.