Age | Commit message | Author |
|
Some patches for MAM had gone into Swiften without being ported to Stroke. This patch brings Stroke
up to date with Swiften.
The Swiften patches in question are:
9b762e1cf26cfe12cf601d9ea95cf91b3f95c799 -- Add node attribute to MAMQuery
8096f80861667381b777af774cfd446d6fc8cda8 -- Brining XEP-0313 (MAM) implementation in line with version 3.0.
Test-information:
Ran the updated JUnit tests in Eclipse; they all passed OK.
Ran make and make test in a Stroke checkout. Everything built OK and the JUnit tests passed.
Change-Id: I95bf5d598808f48fe2d7af12c0f07d852d68c115
|
|
Changes to catch up with Swiften changes to FormField in commit 00284e5;
also adds the <reported/> and <item/> elements, which were added to Swiften in commit 83afa3d.
Changes include refactoring of the FormField class, changes to the Form parser
and serializer classes, and updates to the JUnit tests.
Test-information:
Tested using updated JUnit tests, all tests complete successfully.
Change-Id: Ic91ad4a11a335fb3d2b2a2c4a1865f836e2af70b
Reviewer: Alex Clayton <alex.clayton@isode.com>
Reviewer: Gurmeen Bindra <gurmeen.bindra@isode.com>
|
|
The JavaConnection code which reads from a socket detects a socket
closure and emits a disconnected signal.
It was noticed that on some occasions, data was arriving on the socket
just before it was closed, and this data was never passed to the
application.
This happens when the server writes e.g. a "BYE" message and closes
the socket straight away: when JavaConnection is woken to read the
message, it does so and then goes on to notice that the connection has
been closed and throws an IOException without passing the message back
to the application.
This patch fixes the problem by making sure that any data read prior
to the close being noticed is passed to the application before the
disconnected signal is emitted.
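For illustration, a minimal sketch of the intended ordering (the stream type and the callback interfaces below are placeholders, not the actual JavaConnection members):
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.function.Consumer;

    // Hypothetical helper: drain whatever can be read, hand the bytes to the
    // application first, and only then report that the connection has closed.
    static void drainThenReportClose(InputStream in, Consumer<byte[]> onData,
                                     Runnable onClosed) throws IOException {
        ByteArrayOutputStream buffered = new ByteArrayOutputStream();
        byte[] chunk = new byte[1024];
        int count;
        while ((count = in.read(chunk)) > 0) {
            buffered.write(chunk, 0, count);
        }
        if (buffered.size() > 0) {
            onData.accept(buffered.toByteArray()); // deliver the final "BYE"-style data
        }
        if (count < 0) {
            onClosed.run();                        // only now signal the closure
        }
    }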
Test-information:
It was possible to provoke the problem by deliberately breaking socket
connections - if you do this often enough you see cases where data
read from the socket is lost.
After this patch, such cases do not result in data loss.
Also tested with email client and verified that connections to
icloud.com which previously had provoked this problem when
authentication failed now seem to return all data reliably.
Change-Id: Ieba0f4186b7c91e55f5f1a4b3b64bc923006b933
|
|
The Java code was never emitting the onDataWritten signal, although
the corresponding C++ code in Swiften does.
This change causes the signal to be emitted whenever data is
successfully written to the socket.
Test-information:
Tested using an application which was registering for the signal;
previously it never saw "onDataWritten"; now it does.
Tested using an application which doesn't register for the signal; it
works as before.
Change-Id: I1399af0721ef8226c0c4d2420bbe23f53ad3494f
|
|
The POODLE vulnerability means that using SSLv3 is insecure. So this
change removes it from the list of protocols that JSSEContext may use.
Oracle's "Java Cryptography Architecture Standard Algorithm
Name Documentation"
http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html
Lists the "standard names" that can be used in this context:
SSLv2
SSLv3
TLSv1
TLSv1.1
TLSv1.2
SSLv2Hello
After this patch, only the three "TLS" protocols will be allowed.
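For reference, a minimal sketch of restricting the enabled protocols on an SSLEngine via the standard JSSE API (this is not the verbatim JSSEContext code):
    import javax.net.ssl.SSLEngine;

    // Sketch: enable only the TLS protocol versions, excluding SSLv3 and SSLv2Hello.
    static void restrictToTls(SSLEngine engine) {
        engine.setEnabledProtocols(new String[] { "TLSv1", "TLSv1.1", "TLSv1.2" });
    }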
Test-information:
Tested using JRE6 and JRE7; viewing the SSL handshake indicates that
the requested protocol is the one used when the handshake occurs.
Change-Id: I99710a72a4b8567226b1205fdf64c6c67ccc2a9a
|
|
Add a SubjectSerializer to Stroke.
Test-information:
Created a Message object and set its subject. The subject field now turns up in the XMPP telemetry.
Change-Id: I7b310d6dc52852e5704696e5e3762bed6a4d53ad
|
|
This patch updates Stroke, as per the Swiften code, to get the peer certificate chain.
Test-information:
Tested using M-Link Console (an XMPP client) to look at the certificate and chain.
Change-Id: I2662511b72f9ca6d176a9f4c1e02d10b5df5d2c7
|
|
Until now, Stroke would not do trust anchor checking because there was
no suitable way of getting to a default trust store.
This patch makes Stroke use the JDK's default trust store for looking up
trust anchors. If it can find the trust anchor in the JDK's store, it
proceeds to do validity checks. If any check fails, an error is set
and it is up to the client to decide whether it is happy with the certificate.
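For illustration, a hedged sketch of obtaining trust managers backed by the JDK's default trust store (initialising the factory with a null KeyStore makes it fall back to the bundled cacerts store); this is not the verbatim Stroke code:
    import java.security.KeyStore;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.TrustManagerFactory;

    // Sketch: look up trust anchors in the JDK's default trust store.
    static TrustManager[] defaultTrustManagers() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null => use the JDK's default trust store
        return tmf.getTrustManagers();
    }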
Test-information:
I tested with the XMPP client MLC.
I got prompted with the certificate for a server whose CA was not in the Java trust store.
After adding the CA to the JDK trust store, no prompt was seen.
I then renewed the certificate with validity = 2 minutes.
On doing a connection, MLC prompted me because the certificate was expired
even though the CA was in the trust store.
Change-Id: Id3fc86d85641f07814ff8621b8bf038cde406063
Reviewer: Nick Hudson <nick.hudson@isode.com>
Reviewer: Kevin Smith <kevin.smith@isode.com>
|
|
Corresponds to the Swiften change of the same name, d949d1638c
Test-information:
Unit tests pass. Verified that the new code works as expected in
a test application that previously would never see timeouts.
Change-Id: I95cc73a81e42d6ac00c79f74531e8dd6c67882f3
|
|
Since the initial Stroke TLS implementation was done, some changes
were made in Swiften, starting with
"Show Certificate dialog from certificate error window."
159e773b156f531575d0d7e241e2d20c85ee6d7c,
which means that certificate verification uses the peer's certificate
chain, and not just the peer's EE certificate.
This change updates Stroke so that its API now more closely matches
what Swiften does.
Note that any current Stroke clients that implement the
"CertificateTrustChecker" interface will break, as this patch makes an
incompatible change to that interface, requiring implementing classes
to handle a certificate chain rather than a single certificate.
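As a hedged illustration of the shape of the incompatible change (the method name and certificate type are approximations, not the verbatim Stroke declarations; java.security.cert.Certificate is used here only to keep the sketch self-contained):
    import java.security.cert.Certificate;
    import java.util.List;

    public interface CertificateTrustChecker {
        // before: boolean isCertificateTrusted(Certificate certificate);
        // after: the whole presented chain is passed to the implementation
        boolean isCertificateTrusted(List<Certificate> certificateChain);
    }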
Isode copyright notices are updated; Remko copyright notices are
updated to reflect the current copyright notices in any equivalent
Swiften source files.
Test-information:
Used MLC (after having patched it for CertificateTrustChecker changes)
and verified that it sees the entire certificate chain coming back.
Ran self-tests for Stroke and saw no JUnit failures.
Change-Id: I3d863f929bfed3324446cadf3bb4d6b9ff916660
|
|
Before this patch, some classes used their own private functions for date/time handling.
This patch makes them use the ones from the DateTime class.
Test-information:
JUnit tests pass.
Change-Id: I1330c55fbf65205516d6847e4655992ad817fbc4
|
|
The class IQRouter has a private "jid_" field that was not being
initialised to contain an invalid JID, which meant that there was a
risk of NullPointerException if anyone called the "getJID()" method
and tried to use the returned JID.
This showed up because one of the unit tests was getting a
NullPointerException, which caused the failure:
[junit] Test com.isode.stroke.queries.requests.GetPrivateStorageRequestTest FAILED
The failure was shown to have been introduced by the change "Check
sender on incoming IQ responses"
(535e1a979a164f807aa64bf2df2bb36e7015ff17)
This change fixes the initialisation. The other fields in this class
are always initialised so can never be null.
Test-information:
After this patch, unit tests no longer show the failure.
Change-Id: Idfcabf5393c8353194dddc414d58c37301487908
|
|
Import the class SimpleEventLoop from Swiften into Stroke. This also involves renaming the current
SimpleEventLoop class to ImmediateEventLoop.
Test-information:
By code inspection.
Change-Id: Ie108a7b3ff98bb078cdd0017f4536e8bd9b76956
Signed-off-by: Alex Clayton <alex.clayton@isode.com>
|
|
Change-Id: I4e5368f9ac86446b7ebf976e2cb63d64ebefe7b2
|
|
The Connector class had "_xmpp-client._tcp." hard-coded in it, which
meant that it was not suitable for non-XMPP clients.
This change means that Connector could now be used by clients who are
interested in arbitrary SRV records; the CoreClient class is updated
accordingly.
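For illustration, a hedged sketch of the idea; the parameter name below is made up for this example and is not the actual Stroke API:
    // Sketch: the SRV prefix becomes a parameter rather than a constant.
    static String srvQueryName(String serviceLookupPrefix, String domain) {
        // e.g. "_xmpp-client._tcp." + "example.com", or "_imap._tcp." + "example.com"
        return serviceLookupPrefix + domain;
    }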
Test-information:
Built and tested using MLC.
Also tested with a client that is interested in IMAP SRV records
Change-Id: Ia23c148fd8afdd7b3271c47b1c96d086d57a44bd
|
|
Change-Id: Ie8ca77ba8dbcd83926d46307ad0e73d804ff7422
|
|
This patch corresponds with the Swiften commit
5f1cb0d768265347bc80862c33f5967f07759b10 whose comment reads
Release-Notes: Fixed a bug whereby the sender of an iq wasn't being
checked before matching it to a request.
Note that since the Swiften change, other modifications have been made
to the affected files, and these modifications are not reflected in
this patch.
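For illustration, a hedged sketch of the kind of check being introduced (the exact set of accepted senders is an assumption here, not a description of the Stroke code):
    // Sketch: only match a response to a pending request if its sender is
    // plausible - absent, the JID the request was sent to, or our own JID.
    static boolean senderIsAcceptable(String responseFrom, String requestTo, String ownJid) {
        return responseFrom == null
                || responseFrom.equals(requestTo)
                || responseFrom.equals(ownJid);
    }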
Test-information:
Code builds. Ran with MLC to make sure things all seem to work OK.
Change-Id: Ife96925d4d728bc0fe749d6b5b849fbe4e866315
|
|
Old code was casting Object[] to String[], which may be safe,
but is dependent on the Set's internal implementation of
toArray, and may lead to ClassCastExceptions. We now
preallocate a String[] to avoid the cast and force type
safety for any implementation.
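For reference, a minimal sketch of the safer pattern:
    import java.util.Set;

    // Preallocating the String[] lets toArray() fill and return an array of
    // the right runtime type, instead of casting the Object[] overload.
    static String[] toStringArray(Set<String> cipherSuites) {
        return cipherSuites.toArray(new String[cipherSuites.size()]);
    }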
Test-information:
Was crashing when enabling restricted ciphers on Android. Now
works OK.
Change-Id: I759a369449296f1819e91a25aa123b083ec280c9
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
A recent change made Stroke use the dnsjava library instead of JNDI for
domain service queries, because JNDI had problems with IPv6 addresses.
The change also replaced the use of Java's standard InetAddress class for name
resolution with the Address class inside dnsjava - this change was not necessary, and
is problematic because although the documentation for "Address" says that it "Includes functions
similar to those in the java.net.InetAddress class", it does not provide equivalent functionality.
Specifically, whereas InetAddress.getAllByName() will use the local system's
"host" file when attempting to resolve hostnames, the corresponding Address.getAllByName() method
in dnsjava does not do this.
This means that if a user inserts values into /etc/hosts, they will be ignored by
Address.getAllByName().
As a result, users who had expected Stroke to honour values in /etc/hosts
(which is something you might want to do just for testing purposes) will be surprised
when it stops doing this.
So this patch reverts the code in question to use InetAddress instead of dnsjava's Address class.
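For illustration, a minimal sketch of the reverted lookup; unlike dnsjava's Address.getAllByName(), this goes through the platform resolver and so honours /etc/hosts:
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Sketch: resolve a hostname via the system resolver (hosts file included).
    static InetAddress[] resolve(String hostname) throws UnknownHostException {
        return InetAddress.getAllByName(hostname);
    }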
Test-information:
I added the following lines to my /etc/hosts file.
127.0.0.1 alexmac.com
127.0.0.1 alexmac.clayton.com
At the time of testing there already existed an external domain with the name alexmac.com
but none corresponding to alexmac.clayton.com.
I then ran the 'Check DNS for a domain...' dialog in the MLC Help Menu.
Before the patch this would give me the details for the external domain
for 'alexmac.com' and say no DNS could be found for 'alexmac.clayton.com'.
After the patch the correct details (i.e. 127.0.0.1) were returned for both domains.
Also, before the patch I could not connect to the local XMPP server 'alexmac.com'.
After the patch I connected correctly.
Change-Id: If7f15b8aa98313278a1892eb27a5f73aaea8802b
|
|
There are limitations when using JNDI for DNS lookups, including that
it does not properly handle the situation when resolv.conf contains
IPv6 addresses (Isode bug #44832) - see e.g.
http://java.net/jira/browse/JITSI-295
JNDI is also not readily available on Android, which makes it slightly
more awkward to use Stroke on that platform.
This patch changes the PlatformDomainName classes so that they use
classes from dnsjava rather than JNDI.
The patch also updates the build scripts so that dnsjava.jar is
fetched (if necessary) and included in the build.
Indentation in build.xml has been tidied up.
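For illustration, a hedged sketch of an SRV lookup through dnsjava rather than JNDI (class names from the org.xbill.DNS package; error handling is simplified and this is not the verbatim Stroke code):
    import org.xbill.DNS.Lookup;
    import org.xbill.DNS.Record;
    import org.xbill.DNS.SRVRecord;
    import org.xbill.DNS.TextParseException;
    import org.xbill.DNS.Type;

    // Sketch: query the _xmpp-client SRV records for a domain via dnsjava.
    static void printSrvRecords(String domain) throws TextParseException {
        Lookup lookup = new Lookup("_xmpp-client._tcp." + domain, Type.SRV);
        Record[] records = lookup.run();
        if (lookup.getResult() != Lookup.SUCCESSFUL || records == null) {
            return; // no SRV records found
        }
        for (Record record : records) {
            SRVRecord srv = (SRVRecord) record;
            System.out.println(srv.getTarget() + ":" + srv.getPort()
                    + " priority=" + srv.getPriority() + " weight=" + srv.getWeight());
        }
    }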
Test-information:
Ran unit tests - ok
Ran MLC - works OK and no longer throws NumberFormatExceptions
when resolv.conf contains "nameserver 2001:470:f052::2".
Change-Id: Iacf1105c52c281f9e59b60ea6caa011914b588dc
|
|
The example code includes references to Swing, which isn't available
for all environments (e.g. Android) and so this change provides an
alternate build target to allow stroke.jar to be built without
processing the example code.
The original "dist" target was incorrect in the way it was creating
the jar file, because it was creating a standalone MANIFEST.MF file
(which didn't get used for the jar file at all). So that has been
corrected (for the dist-with-examples target).
So if you now do
% ant -Dnoexamples=1
then no examples will be built.
If you do
% ant
then the jar file will include examples (as before) and will also have
a manifest that specifies "Main-class" properly.
Note that this change has already been made to the isode repository
and will not need applying there.
Test-information:
Tested building with/without examples. The jar file appears correct in
each case.
Prior to this patch, saying "java -jar stroke.jar" did not work,
because the manifest wasn't being used properly. After this patch, it
does (for the jar file that includes examples).
Change-Id: I68eadc4355cb655dd31e6afec48405a6fe2c057e
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
This is a change that was made in the isode repository some months ago
(as part of some other isode-specific changes) which did not get
propagated into the swift repository.
If you're on a system with Java7, then by default when you build
Stroke you'll get class files that only work on Java7 and later (you
can't run them under Java6 for example). This causes problems in two
specific cases:
1) some unit tests fail with java.lang.VerifyError
2) stroke's jar file will not be compatible with Android
The unit tests which fail show errors like this:
<error message="Instruction type does not match stack map in method com.isode.stroke.base.ByteArrayTest.byteify([I)[B at offset 31" type="java.lang.VerifyError">java.lang.VerifyError: Instruction type does not match stack map in method com.isode.stroke.base.ByteArrayTest.byteify([I)[B at offset 31
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:186)
</error>
This appears to be due to a limitation of Cobertura 1.9, and is
supposedly fixed with Cobertura 2.0.3
https://github.com/stevesaliman/gradle-cobertura-plugin/issues/2
However, when I tried using the updated version of cobertura there
appear to be other issues, so I think that needs looking at
separately.
The other problem with 1.7 is that Android doesn't yet support 1.7
format class files, and so you need to build with -target=1.6 if you
want to be able to use the resultant stroke.jar on Android.
So for these reasons, and because Stroke has no need of any 1.7
features, it seems pragmatic to change the "source" and "target"
parameters of the build files to use 1.6.
I'll look at the cobertura thing separately.
Test-information:
Checked out stroke, added this change, did a build/test to make sure
things worked ok. Unit tests work ok (before this change, they fail
with java.lang.* errors)
Change-Id: I8ad3b8e341eebef13ae647d6e66706e4265432ca
|
|
This patch should provide more information when Stroke receives invalid XML
or when an exception occurs.
Test-information:
Deliberately caused an IllegalArgumentException from an XMPP client and verified
that I received the exception message and the XML in the logs.
Change-Id: Id86b530f73f22c85ca36e54042ff7af74d55437d
|
|
Some discussion followed the "Fix synchronization problem in
ByteArray" patch, and that led us to believe that it would be better
to change the JavaConnection class so that it does not rely on being
able to pass ByteArrays around in a way that makes them vulnerable to
the problems that had been seen.
The JavaConnection class accepts a ByteArray in its "write()" method,
and emits a ByteArray when it has read data.
ByteArrays are not the ideal way for the JavaConnection class to
manipulate data and so this patch changes the implementation so that:
a) the "write()" method extracts the byte[] from the supplied
ByteArray and uses these objects, rather than keeping
references to the ByteArray objects (which might lead to
synchronisation issues).
b) the "doRead()" method uses a ByteArrayOutputStream to hold incoming
data, and only constructs a ByteArray out of it when it is ready to
return the data to the application.
These changes make the class more efficient, since in the case of (a),
the need to create temporary ByteArrays is removed, and in (b) the
code no longer creates ByteArrays by iterating through the network
data one byte at a time and appending it to a ByteArray.
It also means that the "synchronized" patch (which would fix the
problem) is no longer necessary, and so that code is reverted.
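For illustration, a minimal self-contained sketch of the pattern described in (a) and (b); the class and field names are placeholders rather than the actual JavaConnection members:
    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class BufferedSocketSketch {
        // (a) copy the caller's bytes up front; no reference to the shared object is kept
        private final Queue<ByteBuffer> pendingWrites = new ConcurrentLinkedQueue<ByteBuffer>();
        // (b) accumulate incoming network data and build one result when handing it back
        private final ByteArrayOutputStream readSoFar = new ByteArrayOutputStream();

        void write(byte[] callerData) {
            pendingWrites.add(ByteBuffer.wrap(callerData.clone()));
        }

        byte[] takeReadData() {
            byte[] result = readSoFar.toByteArray();
            readSoFar.reset();
            return result;
        }
    }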
Test-information:
I patched the code to emulate the situation that would occur when a
buffer is only partially written, and verified that in this case it
correctly re-inserted the unwritten portion of the buffer at the
front of the pending queue.
Ran MLC to various servers, all seems to work OK.
Tested in Harrier, seems to work OK, and does not exhibit problems
that we had seen previously which led us to investigate this issue.
Change-Id: Ifcda547402430c87da45ba7d692518b5af285763
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
We noticed that in certain circumstances a stream of data being sent
to a server was being corrupted.
According to the "onDataWritten" signal, we could see that the data
which Stroke thought it was writing was valid, but by adding debug
code to the JavaConnection class, we could see that what was actually
being sent over the socket was wrong. For example, where
"onDataWritten" would report something like
some text for the server
the actual data being written to the socket (as shown by
toString() of the bytestream) would be something like:
some text fo\200\300\200\300\200\300\200\300\200\300\200\300\200\300\200\300\200\300\200\300\200\300\200\300
i.e. the length of data is correct, but the last part of the buffer is broken.
We saw this on non-TLS connections, but never on TLS connections.
The reason for this (verified after some debugging) is that the
"ByteData.getData()" method was unsynchronized. In the failing cases,
two threads are calling this method at once. The first one finds that
"dataCopy_" is null, and so new's it and starts filling it with data.
The second thread calls "getData()" before this completes, which means
it sees "dataCopy_" as non-null, and uses that value (even though the
first thread hasn't finished populating it yet).
In the failing scenario, the two threads involved were (1) thread that
was handling the "onDataWritten()" callback (which called "getData()"
to get a String that it sent to a debug stream) and (2) the
JavaConnection code (which wants to write the data to the socket).
It seems likely that the reason this doesn't happen for TLS
connections is that in that case, the JavaConnection object will be
processing a ByteArray object that has been generated via the
SSLEngine (rather than the one which "onDataWritten()" sees), and so
the chance of two threads both calling "getData()" is reduced.
(I have not followed the TLS code path thoroughly to verify this).
So this change makes any method in ByteArray that touches "dataCopy_"
be synchronized (as well as hashCode(), as suggested by FindBugs).
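For illustration, a minimal sketch of the lazily built copy and the fix (ByteArray's internals are paraphrased here, not quoted):
    // Sketch: without "synchronized", a second caller can observe dataCopy_
    // non-null while the first caller is still filling it in.
    class LazyCopySketch {
        private final byte[] source;
        private byte[] dataCopy_;

        LazyCopySketch(byte[] source) {
            this.source = source;
        }

        synchronized byte[] getData() {
            if (dataCopy_ == null) {
                dataCopy_ = source.clone(); // built exactly once, under the lock
            }
            return dataCopy_;
        }
    }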
Test-information:
Having inserted some debug code, I could reproduce the "data
corruption" problem reliably.
After adding the "synchronized" directive to "getData()", I could no
longer reproduce the corruption.
Ran MLC with this patch and it works with no problems.
Change-Id: I02008736a2a8bd44f3702c4526fd67369a3c136a
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
If a TLS connection results in the server choosing an anonymous cipher
suite, then no server certificate will be returned by the server.
This ought not to happen, since XMPP clients are expected only to
propose non-anonymous cipher suites, but it could be that a client is
coded to propose anonymous suites, or that a bug in the server means
that it fails to return a server certificate.
This change updates the ServerIdentityVerifier to make it resilient
against these situations, treating this situation as equivalent to
"certificate presented by server does not verify".
Test-information:
In my testing, I was deliberately using anonymous ciphers and getting
Stroke crashes. After this patch, I don't get Stroke crashes any more
(but the connection fails because the certificate verification fails).
Change-Id: Ia7b9b8dad7a054ff266a78ef33a56157320654c8
|
|
In the PlatformDomainNameResolver class there is a DomainNameAddressQuery
class (accessible via DomainNameResolver->createAddressQuery()) for
performing a DNS lookup on a given domain name. This should have been
returning the set of all HostAddresses associated with a given domain, but
instead was only returning a singleton set (or an empty one if there was no DNS entry).
This patch fixes this by changing the method call from
InetAddress.getByName() to InetAddress.getAllByName().
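For reference, a minimal sketch of the one-line change:
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Sketch: return every address for the domain rather than only the first.
    static InetAddress[] lookUpAddresses(String domain) throws UnknownHostException {
        // before: return new InetAddress[] { InetAddress.getByName(domain) };
        return InetAddress.getAllByName(domain);
    }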
Test-information:
Tested on top of my MLC Diagnose SRV patch. For 'google.com' we now see a
full list of IP addresses associated with it, rather than just the one.
Change-Id: I6e57c16bb64f76048f16bcff8ee9c1924049a051
|
|
This change moves responsibility for creating the TLSContextFactory
from CoreClient into NetworkFactories, which is in line with the
Swiften implementation.
This means that a caller may now provide his own concrete
TLSContextFactory using code of the form:
NetworkFactories myNetworkFactories;
.
.
myNetworkFactories = new JavaNetworkFactories(eventLoop()) {
@Override
public TLSContextFactory getTLSContextFactory() {
return new MyTLSContextFactory();
}
};
Test-information:
I implemented separate TLSContextFactory and TLSContext classes that
used OpenSSL (via JNI) to provide SSL functionality. I was able to
switch to using these with the mechanism that this patch provides.
I also verified that existing code which doesn't try to provide its
own NetworkFactories subclass still works as before (i.e. this patch
doesn't break existing applications).
Change-Id: Ibf07ddbbb4a4d39e4bb30a28be9aa0c43afe005f
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
It was noticed that in certain cases, Stroke got stuck when connecting
to a server over TLS, and the server closed down the connection.
Investigation showed that this appears to be caused by the JSSEContext
code not properly coping with a "close notify" SSL message. What
happens in this case is that the SSLEngine generates a response to the
server's "close notify", and expects this to be sent back over the
network.
The original JSSEContext code saw the "CLOSED" status from the
SSLEngine.unwrap(), but assumed that no more data would be generated
by the engine (which was wrong, because the engine wants to send a close
response back), and so got stuck in a loop.
This patch therefore fixes the JSSEContext code to deal properly when
it sees a CLOSED from unwrap.
After the close has been received, an error will be emitted by
JSSEContext so that the application knows that the SSLEngine can no
longer be used (in practice we have always seen the socket closing,
which generates its own error to the application, but it was
recommended that we should have this check in case a server sends a
close notify and does NOT close the socket as well).
It appears that many servers don't actually send the "close notify",
and just drop the connection, which is (presumably) why we'd not seen
this behaviour before.
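For illustration, a hedged sketch of the corrected handling (buffer management and error reporting are simplified, and this is not the verbatim JSSEContext code):
    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLException;

    // Sketch: on CLOSED from unwrap(), the engine still has a close_notify
    // response to send, so wrap it and send it rather than assuming the
    // engine has nothing more to produce.
    static void handleClosedFromUnwrap(SSLEngine engine, SSLEngineResult unwrapResult,
                                       ByteBuffer netOut) throws SSLException {
        if (unwrapResult.getStatus() == SSLEngineResult.Status.CLOSED) {
            engine.closeOutbound();
            engine.wrap(ByteBuffer.allocate(0), netOut); // produces the close_notify reply
            // ...send netOut to the server, then emit an error so the
            // application knows the SSLEngine can no longer be used...
        }
    }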
Test-information:
Tested by connecting to the aforementioned server. This time, when
the connection times out (and the close notify is sent), we no longer
see a loop; the application realises what has happened and attempts
to reconnect.
I have been running with this patch in my copy of MLC for two weeks
and have noticed no difference in behaviour - so far as I can tell the
code is not exercised when talking to M-Link but at any rate the patch
isn't causing anything to break.
Change-Id: Id007c923c510ef1b4ce53192105b00296c65c757
|
|
It is possible to have a null selector if the SocketChannel open failed, so this
patch adds a null check.
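For reference, a minimal sketch of the guard (names are placeholders):
    import java.nio.channels.Selector;

    // Sketch: the selector can be null if SocketChannel.open() failed earlier,
    // so check before using it.
    static void safeWakeup(Selector selector) {
        if (selector != null) {
            selector.wakeup();
        }
    }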
Test-information:
Sanity tested on Linux by connecting/reconnecting to an XMPP service.
Change-Id: Idee180ca4aefd1f743705da674b486dd8acc4922
Reviewer: Nick Hudson <nick.hudson@isode.com>
Reviewer: Kevin Smith <kevin.smith@isode.com>
|
|
I left MLC (an XMPP client) running overnight and noticed a "Too many open files"
error when trying to stop/start the XMPP server. On doing an "lsof | grep java", I noticed
a large number of open sockets, which was presumably the cause of this error.
After this patch, the lsof command shows a constant number of open sockets.
Test-information:
Tested on a CentOS VM by doing "lsof | grep java | wc -l" - open sockets do not increase.
Change-Id: I7ddff78a1efb005177427fda21f1d0b92d8ed7cc
Reviewer: Kevin Smith <kevin.smith@isode.com>
|
|
When investigating problems on Solaris, attention focused on the
JavaConnection class, whose implementation appeared to be non-optimal.
The original implementation had a loop which operated on a
non-blocking socket, and looked something like this:
while (!disconnecting) {
while (something to write) {
write data to socket;
if write failed {
sleep(100); // and try again
}
}
try reading data from socket
if (any data was read) {
process data from socket;
}
sleep(100);
}
Because the socket is non-blocking, the reads/writes return straight
away. This means that even when no data is being transferred, the
loop is executing around ten times a second checking for any data to
read/write.
In one case (Solaris client talking to Solaris server on the same VM)
we were consistently able to get into a state where a write fails to
write any data, so that the "something to write" subloop never exits.
This in turn means that the "try reading data" section of the main
loop is never reached.
Investigation failed to uncover why this problem occurs. The
underlying socket appears to be returning EAGAIN (equivalent to
EWOULDBLOCK), suggesting that the write fails because the client's
local buffer is full. This in turn implies that the server isn't
reading data quickly enough, leading to the buffers on the client side
being full up. But this doesn't explain why, once things have got
into this state, they never free up.
At any rate, it was felt that the implementation above is not ideal
because it is relying on a polling mechanism that is not efficient,
rather than being event driven.
So this change re-implements JavaConnection to use a Selector, which
means that the main loop is event-driven. The new implementation
looks like this:
while (!disconnected) {
wait for selector
if (disconnected) {
break;
}
if something to write {
try to write data;
}
if something to read {
try to read data;
}
if still something to write {
sleep(100);
post wake event; // so that next wait completes straight away
}
}
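For illustration, a hedged sketch of what the loop above looks like with java.nio (the read/write bodies and error handling are omitted; this is not the verbatim JavaConnection code):
    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    // Sketch: block in select() until the socket is ready or wakeup() is called.
    static void eventLoop(Selector selector, SocketChannel channel) throws IOException {
        channel.configureBlocking(false);
        SelectionKey key = channel.register(selector,
                SelectionKey.OP_READ | SelectionKey.OP_WRITE);
        while (channel.isOpen()) {
            selector.select();              // event-driven: no polling every 100ms
            if (key.isValid() && key.isWritable()) {
                // try to write pending data; keep any leftover for the next pass
            }
            if (key.isValid() && key.isReadable()) {
                // read available data and pass it up to the application
            }
            // another thread calls selector.wakeup() when new data is queued,
            // so the next select() returns straight away
        }
    }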
Test-information:
Testing appears to show that the problems we saw on Solaris are no
longer seen with this patch (Solaris tests still fail, but later on,
which appears to be due to a separate problem).
Testing shows that this leads to the thread spending much more time
idle, and only being active when data is being read/written (unlike
the original implementation which was looping ten times a second
regardless of whether any data was being read/written).
Testing using MLC seems to show the new implementation works OK.
I was unable to provoke the "write buffer not completely written"
case, so faked it by making the doWrite() method constrain its maximum
write size to 200 bytes. By doing this I verified that the "leftOver"
section of code was working properly (and incidentally fixed a problem
with the initial implementation of the patch that had been passing
the wrong parameter to System.arraycopy).
Change-Id: I5a6191567ba7e9afdb9a26febf00eae72b00f6eb
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
Making it Long allows it to hold an XML-unsignedLong value as well
as null values. Before this patch, it was an int and defaulted to 0.
This was not right, as int is too small to hold the number of seconds of
last activity time, and primitive data types do not allow for null values.
Test-information:
Tested using an XMPP client to query the last activity IQ on MUC rooms.
Change-Id: I6274403610bd60038fd7c235fad3bc2798f38e19
Reviewer: Kevin Smith <kevin.smith@isode.com>
|
|
Some implementations of SSLEngine (notably Apache Harmony, used in
Android) never return the FINISHED status from calls to wrap or unwrap,
causing the TLSLayer never to emit its completed signal.
With this change, we treat a return of NOT_HANDSHAKING as equivalent
to FINISHED. The NOT_HANDSHAKING will never happen before handshaking
has finished, because the status during handshaking should always be
NEED_WRAP, NEED_UNWRAP, or NEED_TASK.
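For reference, a minimal sketch of the relaxed check:
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLEngineResult.HandshakeStatus;

    // Sketch: treat NOT_HANDSHAKING as FINISHED, since some SSLEngine
    // implementations never report FINISHED once the handshake is complete.
    static boolean handshakeComplete(SSLEngineResult result) {
        HandshakeStatus status = result.getHandshakeStatus();
        return status == HandshakeStatus.FINISHED
                || status == HandshakeStatus.NOT_HANDSHAKING;
    }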
Test-information:
Tested with OracleJDK and OpenJDK using Isode M-Link Console to ensure
that the behaviour when negotiating TLS is unchanged (debugging shows
that in these cases it always sees the FINISHED status).
Tested on Android. Without this patch TLS handshakes don't complete;
with the patch, they do.
Change-Id: Ied2989cb2a3458dc6b1d2584dcc6c722d18e1355
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
Direct copy of current signal/slot implementation,
with 4 generic parameters.
Change-Id: I4b2cb37fd134e80e8481950030b6e8721f4f2854
|
|
By default, when a TLS connection is established, the SSLContext will
enable all available ciphersuites. This may not be appropriate in
situations where export restrictions apply and higher-grade
ciphersuites are prohibited.
This change allows a caller to configure a restricted set of
ciphersuites to be used when establishing TLS connections.
Callers use the JSSEContextFactory.setRestrictedCipherSuites() method
to configure a list of ciphersuites. Any ciphersuites which are not
included in the list will be excluded in subsequent TLS connections.
If the JSSEContextFactory.setRestrictedCipherSuites() is never called,
or called with a null parameter, then no restriction will apply.
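For illustration, a hedged sketch of how such a restriction can be applied to an SSLEngine (the parameter type and the internals of JSSEContextFactory are assumptions here):
    import java.util.ArrayList;
    import java.util.List;
    import javax.net.ssl.SSLEngine;

    // Sketch: enable only the supported suites that also appear in the
    // caller's restricted list; a null restriction means "no restriction".
    static void applyCipherSuiteRestriction(SSLEngine engine, List<String> restricted) {
        if (restricted == null) {
            return;
        }
        List<String> enabled = new ArrayList<String>();
        for (String suite : engine.getSupportedCipherSuites()) {
            if (restricted.contains(suite)) {
                enabled.add(suite);
            }
        }
        engine.setEnabledCipherSuites(enabled.toArray(new String[enabled.size()]));
    }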
Test-information:
Validated that by calling the new method to restrict the available
ciphers, TLS connections initiated by Stroke only propose ciphersuites
in the restricted list, and connections fail when the server fails to
find an acceptable cipher.
Change-Id: Id0b4b19553a6f386cda27a71f0172410d899218e
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
This patch adds a new "CAPICertificate" class, which can be used to
configure TLS connections that use a client certificate from a Windows
CAPI keystore, including certificates on smart cards.
The JSSEContext class is updated so that "setClientCertificate()"
checks to see whether the CertificateWithKey object that it's been
given is a PKCS12Certificate or a CAPICertificate, and initializes the
appropriate type of KeyStore.
Note that the default behaviour of the KeyStore returned by SunMSCAPI
when choosing a client certificate for TLS authentication is for it to
choose the "most suitable" certificate it finds.
This "most suitable" certificate may not be the one that the user has
chosen, and in fact various certificates in CAPI are not considered by
SunMSCAPI in this case - for example, certificates issued by CAs who
don't appear in the list of acceptable CAs in the server's
CertificateRequest (RFC5246 7.4.4).
The CAPIKeyManager class provided here allows a caller to override the
default behaviour, and force the use of a specific client certificate
(whether it's "suitable" or not) based on the value specified by the
caller when the CAPICertificate object was created.
This also means that it is possible for a user to specify a particular
certificate and use that, even if SunMSCAPI would have thought a "more
suitable" one was found in CAPI.
Test-information:
Tested that P12 based TLS still works
Tested on Windows that I can specify a "CAPICertificate" which is a
reference to a certificate in the Windows keystore whose private key
is held on a smartcard, and that I am prompted to insert the card (if
necessary) and enter the PIN before the TLS handshake proceeds.
Tested on Windows that I can specify a "CAPICertificate" which is a
reference to an imported P12 file where certificate and key are in
CAPI, and the TLS handshake proceeds without asking me for a PIN
Tested that the "CAPIKeyManager" class is correctly forcing use of the
certificate specified by the user, rather than the one which would be
returned by the default SunMSCAPI implementation.
Tested that I can still use "PKCS12Certificate"s to authenticate
Tested that if I try and use a CAPICertificate on a non-Windows
platform, then I can't authenticate, and get errors emitted from Stroke
complaining of "no such provider: SunMSCAPI"
Change-Id: Iff38e459f60c0806755820f6989c516be37cbf08
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
Two things:
- the implementation of JavaTrustManager was attempting to instantiate a
TrustManagerFactory with a hard-coded name of "PKIX", which doesn't
work on Android. So instead of that, we ask for the
TrustManagerFactory's default algorithm - which for the standard JRE
still appears to be "PKIX", but which for Android may be something
else.
- the "hack" which had been in place to force the SSLEngine to
perform a TLS handshake has been removed.
Calling "SSLEngine.beginHandshake()" is not guaranteed to make the
SSLEngine perform the TLS handshake, which it typically only does when
it is told to wrap some data from the client. The earlier version of
JSSEContext provoked this by asking it to send a "<" character, and
then removing the leading "<" from whatever Stroke happened to send next.
It turns out that you can force the handshake to start by telling the
SSLEngine to wrap 0 bytes of data from the client, and so this change
removes the hack, and instead calls "wrapAndSendData()" with an empty
buffer as soon as the SSLEngine has been created.
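For reference, a minimal sketch of kicking off the handshake by wrapping zero application bytes (buffer sizing and the network send are simplified):
    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLException;

    // Sketch: wrapping an empty buffer makes the engine emit its first
    // handshake message into netOut, with no "<" hack required.
    static SSLEngineResult startHandshake(SSLEngine engine, ByteBuffer netOut) throws SSLException {
        engine.beginHandshake();
        return engine.wrap(ByteBuffer.allocate(0), netOut); // then send netOut to the server
    }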
Test-information:
Ran XMPP client that uses TLS and verified that everything still works
as expected.
Change-Id: Ie08d76bd2f5a743320a59bad62a09c1f215c48d6
Signed-off-by: Nick Hudson <nick.hudson@isode.com>
|
|
If since_ is null, calling clone on it was causing a NullPointerException.
Adding a check fixes it.
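For reference, a minimal sketch of the guard (the field's actual type in Stroke may differ from java.util.Date):
    import java.util.Date;

    // Sketch: only clone since_ when it is non-null.
    static Date copySince(Date since_) {
        return (since_ == null) ? null : (Date) since_.clone();
    }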
Test-information:
Tested by creating a room using an XMPP client - no exception seen after the fix
Change-Id: I25b151ac8e5b25562b8941eb5532fa9b9ea2de6f
|
|
Change-Id: I49cf4cba01452b291655dfccdc134180270c1ff3
|
|
Change-Id: I862e11dc293ce84e0311f1ad470293e07735aeaf
|
|
Change-Id: Ib02394df2c7bb818c2409b1d6f2fc3ad0d938224
|
|
Change-Id: Id2710c674abc19cdf2b37f97fe53288b86c7f367
|
|
Change-Id: Iab58df1cf6a3b8b9461b71fd3f27476214e07286
|
|
Change-Id: Ie2ec5f94e0a1ee381ab43c09465571de94e64b6f
|
|
Change-Id: I373469fa7a7ba8d5c639d4a1f2d4e07182eeb953
|
|
Change-Id: Ib4717891c591911e68a5b27b7af4e666b6296d48
|
|
Change-Id: Ic7adcf9790429c23b9493ec22324198bfc474b6f
|
|
Change-Id: I0e333781b140a97788e35d401e054a413af0ab76
|
|
Change-Id: Ia1460c62f0bce645248b2412a60a6ad7420ae849
|