Saturday, October 26, 2013

Java Synchronization and Concurrency Across Multiple JVMs, Multiple Computers, Part 2

I’m going to add a couple of asides to this blog entry.  One of them is at the end, and details an additional consideration for the synchronization discussed in “Part 1” – reading the shared file, in addition to writing to it.  But before we get started with “Java Synchronization Across JVMs, Part 2”, I want to describe Flag Passing in general.  You will still use this general Flag Passing approach when sharing files with remote systems (not by drive mapping, a network path or a file:// URL), for instance by SFTP.  This is nowhere near “real-time” file sharing, and should be considered “scheduled” or “periodic” file sharing.

Our sample scenario is that one system (SOURCE) creates files and a second system (TARGET) processes those files, then deletes them.  When SOURCE creates a file, we will have it also create or update a file named “mark.mark”.  When TARGET connects (periodically) to SOURCE, the first thing TARGET will do is check for a file named “mark.mark” and get the date of the file.  Then TARGET will read and delete all files with modification dates before the date of “mark.mark”.  The date of the file “mark.mark” is the flag we are passing.  This is just one example of Flag Passing, but it helps us get our bearings for the following discussion.
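
Here is a minimal sketch of the TARGET side of that scenario.  The path is hypothetical, and for simplicity it works against a local copy of the SOURCE directory rather than a live SFTP session:

import java.io.File;

public class MarkFileTarget {
    public static void main( String[] args ) {
        // Hypothetical local view of the SOURCE directory (in practice, fetched over SFTP)
        File sourceDir = new File( "C:\\dac\\source" );
        File mark = new File( sourceDir, "mark.mark" );
        if( !mark.exists() ) return;             // no flag yet - nothing is guaranteed complete
        long flagDate = mark.lastModified();     // the date of mark.mark is the flag we are passing
        File[] files = sourceDir.listFiles();
        if( files == null ) return;
        for( File f : files ) {
            if( f.getName().equals( "mark.mark" ) ) continue;
            // Only files older than the flag are guaranteed to be completely written
            if( f.lastModified() < flagDate ) {
                // ... read and process f here ...
                f.delete();
            }
        }
    }
}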

When I was in high school in the 1970s, my dad was stationed overseas in a relatively remote location with the US Air Force.  He called home every week, and it went like this.  He would say “Hello, I love you.  Over.”  And somewhere, somebody switched something so that we could reply.  At the end of our greetings, we said “Over” and dad could start talking again.  If we failed to say “Over”, then we would sit there in silence until we remembered.  It was like a conversation on a Citizens Band radio with a couple of major differences: 1) there was a guy in the middle making a change to allow conversation to flow in the opposite direction, and 2) it was a conversation between only two parties (unless the NSA was listening in).  Let me call this kind of coordination “flag passing”, since I don’t know a more technical term (perhaps semaphores or signaling).

Those conversations with my dad are the analogue, except for the middle-man, of another way to do Java (or any program) synchronization across operating environments.  (Actually, the analogy has several applications, but it also breaks down pretty quickly.  However, I like the recollection, so I’ll leave it in here.)  In this blog discussion, the presence of a file IS the flag that indicates something is available to be processed.

In this discussion, we are going to examine the case where multiple files (of the same name) are created, and a separate process (perhaps in a separate JVM) deals with each file and deletes it when it appears.  First we will consider the case where the file is created by only one “Talker”; throughout this blog we limit the discussion to the case where there is only one “Listener” dealing with the file.  The next case, which is really the base solution, is when there are multiple “Talkers” – actually one or more.  The first case does not use file locking, and it is inherently faulty – we only present it here in order to examine the problem.

The OneTalker class writes some output to a file named “message.txt”.  Before the file is written, OneTalker checks to see if the file already exists.  If it exists, then the Listener has not yet read, processed and deleted the file, so we sit and wait (sleep) until the file does not exist.  Then we simply write and close the file.  There is nothing inherently “exclusive” about this process, and therein lies the problem (discussed below).  Note that, for correctness, there is no added value in having OneTalker sleep for a second while waiting for the existing file to disappear.  We could write that loop as a busy wait: while(outFile.exists()){}  The sleep just keeps the loop from spinning the CPU.

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;

public class OneTalker {
    public static void main(String[] args) {
        // Set true to sleep forever (practically) with file open
        boolean testNoClose = false;
        BufferedOutputStream out = null;
        try {
            // This loop is for demo purposes - 5 iterations
            for( int i = 0; i < 5; i++ ) {
                File outFile = new File( "C:\\dac\\message.txt" );
                // Listener has not picked up previous file
                while( outFile.exists() ) {
                    Thread.sleep( 1000 );
                }
                System.out.println( "New File" );
                out = new BufferedOutputStream( new FileOutputStream( outFile ) );
                // Handle data as byte array - most flexible
                String newString = "message: " + i;
                byte[] outBytes = newString.getBytes();
                out.write( outBytes, 0, outBytes.length );
                out.flush();
                if( testNoClose ) Thread.sleep( 1000000000 );
                out.close();
                // Space out the messages - 10 seconds
                Thread.sleep( 10000 );
            }
        } catch( Exception x ) {
            x.printStackTrace();
        } finally {
            try {
                if( out != null ) out.close();
            } catch( Exception y ) {}
        }
        System.exit(0);
    }
}

In the examples in this blog discussion, I am writing and reading byte arrays.  Unless you are always writing a line of text with a line-end (for example, carriage return / line feed), as with the centralized log file in Part 1 of this blog topic, writing byte arrays is preferred.  [One exception is when you are writing and reading serialized objects.]

For testing purposes, we have a “for” loop that runs through five iterations of the file creation process.  Then to get a little reality “feel”, we sleep for ten seconds between iterations.  Note the boolean testNoClose – we will discuss how to use it below.

The Listener code, OneListener, checks to see if the file exists, and if so, processes the file and then deletes it.  Again, there is nothing exclusive in this process, and if the file is currently being written, two problems will occur: 1) the current partial file will be read, and 2) the file will not be deleted.  Normally, with this test code, the time required to write or read the file is so short that it would be difficult to experience the problem, but you should never code for the best-case scenario.  Instead, as a secure programmer you need to see the potential problem and code for it (which we will do).

You can test for this problem by setting the boolean testNoClose in OneTalker to true.  This will create the file and then go into a long sleep without closing the file.  In that state, when you run OneListener, the same (partial) file will be read and processed repeatedly, and you will see that the file does not get deleted.

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;

public class OneListener {
    public static void main(String[] args) {
        BufferedInputStream in = null;
        try {
            // Same number of loops as Talker
            for( int i = 0; i < 5; i++ ) {
                File inFile = new File( "C:\\dac\\message.txt" );
                // Wait for file to appear
                while( ! inFile.exists() ) {
                    Thread.sleep( 1000 );
                }
                // This is where the problem may occur - the file may still be
                // being written.  In that case, a partial file will be read
                // and the delete will fail.
                in = new BufferedInputStream( new FileInputStream( inFile ) );
                byte[] inBytes = new byte[2000];
                int readQty;
                while( (readQty = in.read( inBytes, 0, inBytes.length )) > 0 ) {
                    System.out.write( inBytes, 0, readQty );
                }
                System.out.println();
                in.close();
                inFile.delete();
                if( inFile.exists() ) System.out.println( "Delete failed" );
            }
        } catch( Exception x ) {
            x.printStackTrace();
        } finally {
            try {
                if( in != null ) in.close();
            } catch( Exception y ) {}
        }
        System.exit(0);
    }
}

Just for consistency in our demonstration, OneListener has a “for” loop set to consume as many messages as OneTalker will be sending.  OneListener also tests to see if a message file exists.  If no message file exists, OneListener waits a second (it doesn’t have to) and checks again.  When a message file exists, OneListener consumes it, then deletes the file.  The deletion is another flag of sorts, indicating to OneTalker that it may send another message.

So let’s fix the problem with OneTalker / OneListener, and expand our scenario.  We expand the Talker code in ManyTalker to lock the file while writing.  Since we are locking the file, we can ensure that only one Talker instance will be writing at any one time – so now we can handle multiple parties writing the file.  ManyTalker includes all the same code as OneTalker, with the addition of the FileLock and an extra labeled while() loop (RETRY) for handling the FileLock acquisition.  Whenever we cannot acquire the lock on the file, or experience an Exception while trying to acquire the lock, we loop back to the RETRY label.  When we acquire the lock, we break out of the RETRY while() loop and write the file as we did in OneTalker.

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.nio.channels.FileLock;

public class ManyTalker {
    public static void main(String[] args) {
        // Set true to sleep forever (practically) with file locked
        boolean testNoRelease = false;
        FileLock fl = null;
        BufferedOutputStream out = null;
        try {
            // This loop is for demo purposes - 5 iterations
            for( int i = 0; i < 5; i++ ) {
                File outFile = new File( "C:\\dac\\message.txt" );
                FileOutputStream outFOS = null;
                RETRY: while( true ) {
                    // Someone else is writing or reading file
                    while( outFile.exists() ) {
                        Thread.sleep( 1000 );
                    }
                    try {
                        // In the meantime, someone else might create file and get lock
                        outFOS = new FileOutputStream( outFile );
                        fl = outFOS.getChannel().tryLock();
                        if( fl == null ) continue RETRY;
                        // Else I got the lock!  Break out of while(true)
                        break;
                    } catch( Exception z ) {
                        z.printStackTrace();
                        continue RETRY;
                    }
                }
                System.out.println( "New File" );
                out = new BufferedOutputStream( outFOS );
                // Handle data as byte array - most flexible
                String newString = "message: " + i;
                byte[] outBytes = newString.getBytes();
                out.write( outBytes, 0, outBytes.length );
                out.flush();
                if( testNoRelease ) Thread.sleep( 1000000000 );
                // Release lock before closing stream (closing the channel also releases it)
                fl.release();
                out.close();
                // Space out the messages - 10 seconds
                Thread.sleep( 10000 );
            }
        } catch( Exception x ) {
            x.printStackTrace();
        } finally {
            try {
                if( fl != null ) fl.release();
                if( out != null ) out.close();
            } catch( Exception y ) {}
        }
        System.exit(0);
    }
}

Note that the FileLock is acquired on the FileOutputStream’s channel.  It is an exclusive lock, the only kind we’ve seen so far, acquired using the FileChannel.tryLock() method.  As in OneTalker, but in a different order, we encapsulate the FileOutputStream in a BufferedOutputStream for efficiency.  Note that we release the FileLock explicitly before closing the stream; closing the stream’s channel would also release the lock, but releasing it explicitly makes the hand-off to the Listener obvious.

Finally, we have the ManyListener class, which shares the file lock protocol with ManyTalker.  In this case, the name ManyListener may be deceiving.  There should only be one Listener instance.  The reason for that is the way we need to get a lock on the FileInputStream’s channel for reading – we pass the start position (0), the length to be locked (Long.MAX_VALUE) and a boolean which indicates we are locking the file in “shared” mode.  This type of lock will keep any of our ManyTalker instances from acquiring an “exclusive” lock, but another ManyListener instance could also acquire a “shared” lock on this same file – so we should only have one ManyListener instance.

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.nio.channels.FileLock;

public class ManyListener {
    public static void main(String[] args) {
        // Set true to sleep forever (practically) with file locked
        boolean testNoRelease = false;
        BufferedInputStream in = null;
        FileLock fl = null;
        try {
            // Same number of loops as Talker
            for( int i = 0; i < 5; i++ ) {
                File inFile = new File( "C:\\dac\\message.txt" );
                FileInputStream inFIS = null;
                RETRY: while( true ) {
                    // Someone else is writing or reading file
                    while( ! inFile.exists() ) {
                        Thread.sleep( 1000 );
                    }
                    try {
                        // In the meantime, someone else might get lock
                        inFIS = new FileInputStream( inFile );
                        // This is a shared lock, required for reading a FileInputStream
                        // It will keep Talker from acquiring an exclusive lock
                        fl = inFIS.getChannel().tryLock( 0, Long.MAX_VALUE, true );
                        if( fl == null ) continue RETRY;
                        // Else I got the lock!  Break out of while(true)
                        break;
                    } catch( Exception z ) {
                        z.printStackTrace();
                        continue RETRY;
                    }
                }
                in = new BufferedInputStream( inFIS );
                byte[] inBytes = new byte[2000];
                int readQty;
                while( ( readQty = in.read( inBytes, 0, inBytes.length ) ) > 0 ) {
                    System.out.write( inBytes, 0, readQty );
                }
                System.out.println();
                if( testNoRelease ) Thread.sleep( 1000000000 );
                // Must release lock before closing and deleting
                // OK, since Talker won't write to an existing file
                // And there should only be one Listener!
                fl.release();
                in.close();
                inFile.delete();
                if( inFile.exists() ) System.out.println( "Delete failed" );
            }
        } catch( Exception x ) {
            x.printStackTrace();
        } finally {
            try {
                if( fl != null ) fl.release();
                if( in != null ) in.close();
            } catch( Exception y ) {}
        }
        System.exit(0);
    }
}

As a challenge, you might look at my use of the separate “lock.lock” file on which we acquire an exclusive FileLock, in Part 1 of this blog discussion, and use that same locking paradigm to implement a true “ManyListener” – that is, supporting multiple instances of ManyListener.  Note that ManyTalker will have to be similarly modified in order to share the FileLock on “lock.lock”.  There is a simple alternative modification that can be made to ManyListener – use a RandomAccessFile instead of the FileInputStream / BufferedInputStream combination.  When you instantiate the RandomAccessFile, declare the read/write mode (“rw”).  Then you can get an exclusive lock (via FileChannel.tryLock()) and read the file.  However, there are some efficiency losses in that solution.  A picture is worth a thousand words, and so is code, so here is the RandomAccessFile solution.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class ManyListener {
    public static void main(String[] args) {
        // Set true to sleep forever (practically) with file locked
        boolean testNoRelease = false;
        RandomAccessFile in = null;
        FileLock fl = null;
        try {
            for( int i = 0; i < 5; i++ ) {
                File inFile = new File( "C:\\dac\\message.txt" );
                RETRY: while( true ) {
                    while( ! inFile.exists() ) {
                        Thread.sleep( 1000 );
                    }
                    try {
                        // Note that to get an exclusive lock,
                        // you must open the file for read/write
                        in = new RandomAccessFile( "C:\\dac\\message.txt", "rw" );
                        fl = in.getChannel().tryLock();
                        if( fl == null ) continue RETRY;
                        break;
                    } catch( Exception z ) {
                        z.printStackTrace();
                        continue RETRY;
                    }
                }
                byte[] inBytes = new byte[2000];
                int readQty;
                while( ( readQty = in.read( inBytes, 0, inBytes.length ) ) > 0 ) {
                    System.out.write( inBytes, 0, readQty );
                }
                System.out.println();
                if( testNoRelease ) Thread.sleep( 1000000000 );
                fl.release();
                in.close();
                inFile.delete();
                if( inFile.exists() ) System.out.println( "Delete failed" );
            }
        } catch( Exception x ) {
            x.printStackTrace();
        } finally {
            try {
                if( fl != null ) fl.release();
                if( in != null ) in.close();
            } catch( Exception y ) {}
        }
        System.exit(0);
    }
}

For further research, you might try running the pair of ManyTalker and ManyListener with the boolean testNoRelease set to true (in one class at a time).  You will see how each waits indefinitely for the other to release the lock.

One more challenge, if you care to extend this, is to consider how these ideas can be adapted to a two-way file exchange where each class both Talks and Listens, synchronously (I speak and you listen, then we switch roles) or asynchronously (I speak, and either you speak or listen, and I might speak again before you reply).  I can imagine the code but can’t think of a good application – perhaps “Auto-Chat”.

Now our second “aside” – this should have been included in the “Part 1” blog of this discussion.  In addition to writing to the centralized log, you may want to read from it and place the output in a report web page.  In this case, you should acquire a FileLock on the same “lock.lock” file used when writing to the log, as shown in the reportCentralLog() method, below.  The only concern here is that, the way this is written, you will retain the lock while you read through the entire file – thus keeping others from writing to the log for an extended period.  This is probably not a good idea.  Perhaps you should only read the tail end of the file: for example, get the file length (say, 50,000 bytes) and skip ahead in the stream to a position 5,000 bytes before the end before you start reading.  (Note that the offset argument in the byte array reading paradigm, in.read( inBytes, offset, length ), is an offset into the byte array, not into the file, so you skip in the stream instead.)  A sketch of this tail reading follows the reportCentralLog() method.

private static String reportCentralLog( String message ) throws ServletException {
    StringBuffer rtrnSB = new StringBuffer();
    // Need something external to this JVM to test singularity
    FileOutputStream fos = null;
    FileLock fl = null;
    BufferedReader centralIn = null;
    try {
        fos = new FileOutputStream( "\\\\server\\dir\\lock.lock" );
        fl = fos.getChannel().tryLock();
        // Null when can't get exclusive lock on file
        while( fl == null ) {
            try {
                Thread.sleep(1000);
                fl = fos.getChannel().tryLock();
            } catch( Exception v ) {}
        }
        // At this point, I have exclusive lock on file!
        centralIn = new BufferedReader(
                new FileReader( "\\\\server\\dir\\central.log" ) );
        // File read (and written) a line at a time
        String inString;
        while( ( inString = centralIn.readLine() ) != null ) {
            // readLine() strips the line terminator, so add one back
            rtrnSB.append( inString ).append( '\n' );
        }
    } catch( Exception x ) {
        throw new ServletException( x.toString() );
    } finally {
        try {
            if( centralIn != null ) centralIn.close();
        } catch( Exception y ) {}
        try {
            if( fl != null ) fl.release();
            if( fos != null ) fos.close();
        } catch( Exception y ) {}
    }
    return rtrnSB.toString();
}
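
Here is a minimal sketch of that tail-reading idea.  It is not part of the original method; the path and the 5,000-byte tail size are assumptions, and it presumes the caller already holds the exclusive FileLock on “lock.lock” as in reportCentralLog() above:

// Hypothetical helper: read only the last tailSize bytes of the central log
private static String readLogTail( long tailSize ) throws IOException {
    File logFile = new File( "\\\\server\\dir\\central.log" );
    StringBuffer rtrnSB = new StringBuffer();
    BufferedInputStream tailIn = new BufferedInputStream(
            new FileInputStream( logFile ) );
    try {
        long toSkip = logFile.length() - tailSize;
        // Skip ahead in the stream; skip() may skip fewer bytes than requested
        while( toSkip > 0 ) {
            long skipped = tailIn.skip( toSkip );
            if( skipped <= 0 ) break;
            toSkip -= skipped;
        }
        byte[] inBytes = new byte[2000];
        int readQty;
        while( ( readQty = tailIn.read( inBytes, 0, inBytes.length ) ) > 0 ) {
            rtrnSB.append( new String( inBytes, 0, readQty ) );
        }
    } finally {
        tailIn.close();
    }
    return rtrnSB.toString();
}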

Coding a Fast Browser Proxy Script

Well, I checked Google and wasn’t satisfied that this information was generally available…

So, I’m reviving some research I did in 2003 that led to a significant speed increase in browser proxy determination.  The enterprise environment where you would apply this tactic is one where you have multiple routes to the internet and to corporate resources, multiple on-site subnets (including non-routed subnets) and varying levels of authentication and HTML tag filtering that you want to enforce.
The problem with most example proxy scripts that address these issues is that they use a cookie-cutter approach to determining where to forward a user-entered request.  Often they use the most basic approach, which is to do a DNS lookup on the hostname and see whether it belongs to a specific subnet.

Eventually, most non-local addresses need to be determined by a DNS lookup, but DNS lookups are costly in time and network resources.  It is best to reduce the number of DNS lookups required, or even to eliminate them, where possible.

One clear example where a DNS lookup is not required is when the hostname contains your corporate subnet name; for example, apps.org.com contains the subnet name org.com.  For such a host, the proxy script should return immediately with the directive “DIRECT”, which means don’t go through a proxy.

But in a large enterprise environment, there may be hosts on the open internet (on a demilitarized zone, or DMZ, network) that carry the same subnet name, like www.org.com.  Such a host will need to be addressed by enterprise workstations, using the proxy script, through a proxy server (and through a router off the enterprise LAN).  This test needs to be done before the check described above, where addresses with our enterprise subnet name are returned as “DIRECT”.

Here is the basic script (e.g., proxyscript.pac) to get us this far:

function FindProxyForURL(url, host)
{
    var mHost = host.toLowerCase();

    if( (mHost == "public1"     ) ||
        (mHost == "public2"     ) ||
        (mHost == "public1.org.com") ||
        (mHost == "public2.org.com")
    )
        return "PROXY clearproxy.org.com:8080";

    // dnsDomainIs() resolves upper and lower case domain
    if( dnsDomainIs(host, ".org.com") )
        return "DIRECT";

This script, so far, does not do any DNS lookups.  Notice that the “host” parameter being passed to the FindProxyForURL() function is whatever hostname the user enters in the browser address line.  For local hosts, the subnet name (“org.com”) does not need to be included, so we list the hostnames in our script both with and without the subnet name.  These hosts, since they are ours, might be available through a proxy (e.g., clearproxy) that does not require authentication and perhaps does not do HTML tag filtering.  A security measure that is often enforced is that everything within a set of <applet></applet>, <embed></embed>, and/or <object></object> tags is removed by the proxy before the page is delivered to the client browser.

There may be some additional hosts on the open internet that belong to our Parent company, for which we also don’t want to do authentication or tag filtering.  We can direct the browser to the correct proxy, again without doing a DNS lookup for those specific hosts:

    if( (mHost == "www.parent.com"        ) ||
        (mHost == "www.sister.com")
    )
        return "PROXY clearproxy.org.com:8080";

At this point, we are going to have to do a DNS lookup to get the address of the host.  We will then use the host IP address to determine the proxy directive required.  If the host IP address is bogus, we simply return the “DIRECT” directive – whether it works for the browser or not.

    var HostIP = "999.999.999.999";

    // First 1 or 2 DNS queries here
    if( isPlainHostName(host) || isResolvable(host) )
        HostIP = dnsResolve(host);

    // On bogus HostIP, or localhost, we are done!
    if( (HostIP == null             ) ||
        (HostIP == "999.999.999.999") ||
        (HostIP == "127.0.0.1"      ) ||
        (HostIP == ""               )
    )
        return "DIRECT";

Next we want to identify our local, enterprise subnets by IP address, and return the directive “DIRECT”, as in “no proxy required”.  We should order this list by the likelihood that the browser will be addressing hosts in that subnet, because the tests are evaluated in order and each isInNet() call adds overhead.  For example, if our data center is in subnet 111.11.0.0, and our desktop workstations are in subnet 122.22.22.0, then we would list 111.11.0.0 first, since browsers would most likely be addressing hosts in our data center.  We also include non-routed subnets, if needed.

    if( isInNet(HostIP, "111.11.0.0"     ,"255.255.0.0"    ) ||
        isInNet(HostIP, "122.22.22.0"    ,"255.255.255.0"  ) ||
        isInNet(HostIP, "192.168.0.0"    ,"255.255.0.0"    ) ||
        isInNet(HostIP, "172.16.0.0"     ,"255.240.0.0"    ) ||
        isInNet(HostIP, "10.0.0.0"       ,"255.0.0.0"    )
    )
        return "DIRECT";

If you deal with users, you will find that some will have a unique need to get to a server on the internet that cannot handle your usual tag filtering.  Or perhaps you have some users that need to get to a site (for example a benefits site, like Blue Cross Blue Shield) but who don’t have credentials to authenticate to the proxy.  Those kinds of exceptions will require a separate block in your proxy script, like this:

    if( isInNet(HostIP, "33.33.33.0", "255.255.255.0" ) )
        return "PROXY alternateproxy.org.com:8080";

Finally, you may have several dedicated network routes to corporate subnets (Parent or sister companies) that are not local, for which unique proxy provisions apply (probably no authentication and no tag filtering to resources on those subnets).  For those subnets, you will go through an appropriate proxy, and for EVERYTHING ELSE, you will send the browser through your standard proxy.

    if(
        isInNet(HostIP, "144.44.0.0"    ,"255.255.0.0"    ) ||
        isInNet(HostIP, "155.55.0.0"     ,"255.255.0.0"    ) ||
        isInNet(HostIP, "166.66.66.0"  ,"255.255.255.0"  ) ||
        isInNet(HostIP, "177.77.77.77" ,"255.255.255.252")
    )
        return "PROXY parentproxy.org.com:8080";
    else
        return "PROXY externalproxy.org.com:8080";
}

Java Method Access Modifiers

In talking to my son, Matthew, who is also a Java programmer, I described a simple understanding of the access modifiers applied to methods: default (package), public, private and protected.  In summary, I said:

1)        Protected is for situations you will rarely, if ever, encounter – limiting a method’s visibility to subclasses and to classes in the same package

2)        Public is for when you write a method that you want everybody to call

3)        Private is for methods that you only ever want called from this specific class

4)        No-modifier (default or package) is appropriate for almost everything – it allows your method to be called by other classes in your package, but not by other folks’ classes (in other packages), as the sketch below illustrates
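
Here is a minimal sketch of that last point, using hypothetical class and package names (three separate source files):

// --- File: com/example/tools/Widget.java ---
package com.example.tools;

public class Widget {
    void packageHelper() {}         // no modifier: callable only within com.example.tools
    public void publicApi() {}      // callable from anywhere
    private void internalOnly() {}  // callable only within Widget
}

// --- File: com/example/tools/WidgetFriend.java (same package) ---
package com.example.tools;

public class WidgetFriend {
    void use( Widget w ) {
        w.packageHelper();          // OK - same package
    }
}

// --- File: com/example/app/WidgetClient.java (different package) ---
package com.example.app;

import com.example.tools.Widget;

public class WidgetClient {
    void use( Widget w ) {
        w.publicApi();              // OK - public
        // w.packageHelper();       // compile error - not visible outside the package
    }
}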

I mentioned that I had on occasion abused those rules when using classes written by others.  A couple of times, I created an empty folder hierarchy in my project that mirrored someone else’s package and created my class there so I could use their package-access resources.  For example, I once extended sun.net.ftp.TransferProtocolClient, creating my own FTPClient class with the addition of methods to do such things as proxy login, make directory, make path and chmod.  That was in 2003, when Sun was pretty strict about the sun.* packages and classes.  I just now edited that FTPClient code in Eclipse, referring to a new JDK, and found not only that sun.net.ftp.FtpProtocolException no longer extends IOException (requiring major code updates), but also that I can extend TransferProtocolClient from a different package with no problem.  In this case, I no longer need to mirror the original package in order to extend the class – and I conclude that the resources I need to access are no longer package-protected, they are public.

Saturday, October 12, 2013

New Oracle Installation Lockdown


It is my standard practice to turn off services and applications that are not needed.  I do this from an administrative account; however, I regularly run as a non-administrative account, and you should too.

1)        Typically, I remove everything from Windows Start Menu / All Programs / Startup (C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup ).

2)        Then I run the Registry Editor (regedit.exe) and search for Keys that match the whole string “run”.  Search the entire registry (find next, from top to bottom) and comment out applications that need not run (some call this stuff “Crapware”).  Typically, I add “x_” in front of the application command.  Note that you may need to research each application listed in order to determine whether or not it is needed on your system.

3)        Finally, I run Computer Management (%windir%\system32\compmgmt.msc /s) and set any Services that are not needed from “Automatic” to “Manual” startup.

When Oracle 12c is installed, there are several Services created, and most of them are set to “Automatic” startup.  On my development workstation, I only start the Oracle Services manually, as needed; but even in production environments, several of these Services may not be required at all times – they should be started only as needed.

Oracle Create Session Role and Kicking Everybody Off


Do not use the default Oracle role, Connect.  Use of that role is deprecated.  Instead, create a role to provide valid users with the Create Session privilege.  (Please refer to my book, “Expert Oracle and Java Security”.)  Do not grant the Create Session privilege directly to users; but rather, grant them your role.  Here is an example:

create role create_session_role not identified;
grant create session to create_session_role;
create user username identified by userpassword container=CURRENT;
grant create_session_role to username;

Note that the qualifier “container=CURRENT” is for Pluggable Database (PDB) instances in Oracle 12c.

You should ask, “besides adhering to a philosophy or rule of standardization, why should I use a role for the Create Session privilege?  After all, all standard users need to be able to connect.”  In this case, our role is less for providing a standard grant for users to connect than it is for providing a mechanism to remove that ability without deleting the users (or shutting down the listener or database.)  Basically, the create_session_role role provides us a mechanism for kicking everyone off an active Oracle instance.

Say, for example, we have a hundred user accounts, and all have been granted the create_session_role role.  At any time, twenty of those users may be connected.  Now, for some security or administrative reason, we need to stop all user activity without shutting down the database.  First we need to ensure that no one else can connect, and that the current users cannot reconnect.  This is the easy part.  We simply revoke the Create Session privilege from create_session_role.

revoke create session from create_session_role;

This does not, however, break the current sessions.  Those have already been “created”; for active sessions, the Create Session privilege has already been used and is no longer needed.  To stop their activity, we need to kill the active sessions manually.  This anonymous PL/SQL block will kill all active sessions, except the current session in which it is run.

declare
    pragma autonomous_transaction;
    m_sid v$session.SID%TYPE;
    cursor session_cur is
        select serial#, sid from sys.v$session
        where type='USER' and not sid = m_sid;
    session_rec session_cur%ROWTYPE;
begin
    -- Oracle will not let you kill your own current, active session
    m_sid := SYS_CONTEXT( 'USERENV', 'SID' );
    open session_cur;
    loop
        fetch session_cur into session_rec;
        exit when session_cur%NOTFOUND;
        dbms_output.put_line( 'Killing: ' ||
            session_rec.SID || ', ' || session_rec.serial# );
        execute immediate 'ALTER SYSTEM KILL SESSION ''' ||
            session_rec.SID || ', ' || session_rec.serial# || '''';
    end loop;
    close session_cur;
end;

In this block, we create a cursor (session_cur) of active sessions, selected from sys.v$session; this needs to be run by an account with SYS, SYSTEM or DBA level privileges.  We only care about USER sessions.  Also, we filter out the current session by Session ID (SID).  We find our current SID from the USERENV namespace via SYS_CONTEXT.

We loop through our cursor, getting each record (session_rec) while there are more to find.  We print out a line to DBMS_OUTPUT for each session we are killing.  And we call EXECUTE IMMEDIATE, passing the ALTER SYSTEM command syntax required to kill each session.

So everybody is kicked off the database, except the current user.  And no one may connect / reconnect.  The current user may grant Create Session to any additional user needed for troubleshooting or research on the current situation.  After the situation is remedied, and normal operations are restored, users can be permitted to connect once again with a single grant:

grant create session to create_session_role;

Now, we are very glad to have a role to distribute this privilege to all users.

Java Synchronization and Concurrency Across Multiple JVMs, Multiple Computers

It is pretty standard fare for a Java application to coordinate multiple uses of a single resource by using the “synchronized” keyword.  But there are times when a single resource must be used by multiple programs running in separate Java Virtual Machines (JVMs), perhaps even on different computers.  Synchronizing use of that resource involves a bit more planning.  Let’s look at single-JVM synchronization first so we can see where, how and why cross-JVM synchronization may be applied.

Typically, resources like counters and files that are used by multiple threads (multiple concurrent users) in a web application require synchronization.  In a Java Servlet, the doGet() and doPost() methods are multithreaded.  Each web browser addressing the web application will have a separate thread, executing the method independently and concurrently.  Whenever those methods update a resource that is declared outside of the method scope, it is a shared resource, and that update should be synchronized.
There are a couple of easy ways to synchronize use of those resources.  We can make a method synchronized so that only one thread can use it at a time – all other threads will queue up until the synchronized method is available – they will take turns.  A second way to synchronize use of a resource is to update the resource within a synchronized block.  Let’s look at an example.

public class SynchServlet extends HttpServlet {
            private static PrintWriter fileOut; // Synchronize use of this!

            public void init( ServletConfig config ) throws ServletException {
                        super.init( config );
                        try {
                                    fileOut = new PrintWriter( "logfile.log" );
                        } catch( Exception x ) {
                                    throw new ServletException( x.toString() );
                        }
            }
                       
            public void destroy() {
                        fileOut.close();
            }
           
            public void doGet( HttpServletRequest req, HttpServletResponse res )
                                    throws ServletException, IOException {
                        logWrite( "Using doGet()" );
                        //…
            }
           
            public void doPost( HttpServletRequest req, HttpServletResponse res )
                                    throws ServletException, IOException {
                        logWrite( "Using doPost()" );
                        //…
            }
           
            public static synchronized void logWrite( String logEntry ) {
                        fileOut.println( logEntry );
            }
}
 
In this first example, each time a browser calls on the doGet() and doPost() methods we will write to a log file.  We need to have these threads take turns, so only one thread attempts to write to the file at a time – if more than one thread writes concurrently, log entries can be interleaved or lost.  The simplest way to get the threads to take turns is to place the actual log file write in a synchronized method.  The example logWrite() method is modified with the “synchronized” keyword so that each thread in this web application (Servlet) will wait until the logWrite() method is available, then the thread will hold the lock for exclusive use until finished.  After that, the lock will be released, and the next thread will be able to use the method.

This synchronized method is static, so the lock is applied to the SynchServlet class, not to an instance of SynchServlet – that difference is irrelevant in this case, since there is usually only one instance of a Servlet.  However, in most cases, both “static” and “synchronized” modifiers should be used for a single resource that is shared within a JVM.  As a public static method, you can call it from other classes, and it will enforce the same synchronization controls, like this:
            SynchServlet.logWrite( "Some other message" );
 
Another way we can synchronize multiple threads using a shared resource is to put every use of the resource in synchronized blocks.  The following example shows how we might implement a doGet() success counter.

public class SynchServlet extends HttpServlet {
            private static int doGetSuccessCounter = 0;
            private static Object synchObject = new Object();
 
            public void doGet( HttpServletRequest req, HttpServletResponse res )
                                    throws ServletException, IOException {
                        try {     
                                    synchronized( synchObject ) {
                                                doGetSuccessCounter++;
                                    }
                                    //…                              

                                    synchronized( synchObject ) {
                                                System.out.println( "There are " +
                                                            doGetSuccessCounter + " doGet() successes!" );
                                    }
                        } catch( Exception x ) {
                                    synchronized( synchObject ) {
                                                doGetSuccessCounter--;
                                    }
                                    throw new ServletException( x.toString() );
                        }
            }
}
 
Notice in this example that we increment the doGetSuccessCounter integer in the doGet() method, and we decrement doGetSuccessCounter in the catch block – just for fun, we do not count a thread’s passage through doGet() as a “success” if an Exception is thrown.  Again we want to synchronize use of this shared resource (it is declared outside the method scope) so that we count and discount for every thread calling doGet(), without concurrent threads overwriting one another.  In addition, we synchronize around the report of the current value of doGetSuccessCounter so we are guaranteed to get the current value (after any other lock is released and we have exclusive access to the integer.)

Notice that in each case, when we set up the synchronized block, we specify that the synchronization lock be placed on an object named synchObject.  Any Java object can be used for synchronization locking, but not primitives – that is, we could lock on an Integer object but not on an int primitive.  Our example instantiates an object of type Object named synchObject, and we use it for all the synchronized blocks.  Synchronized blocks must share the lock on a single object in order to coordinate their execution – so all our example synchronized blocks share the lock on synchObject.  Of all the synchronized blocks on synchObject, only one at a time may acquire the lock and execute.  The lock is automatically released at the end of the synchronized block.
You might observe that we have three separate synchronized blocks and that we could get rid of all of them if we just made the doGet() method synchronized.  I love code reduction and refactoring, but this is one case where it would be a very bad plan.  If we made the doGet() method synchronized, then only one browser could view our Servlet at a time; also, the entire doGet() would need to execute before the lock is released.  You always want to limit the amount of time and processing that is done in a synchronized block or synchronized method.  Look back at our examples and note how minimal each synchronized block / method is.

Also note that since synchObject is modified with the “static” keyword, all instances of the SynchServlet class will share the same lock – that is typically what you want for shared locks within a JVM.  If you also share the resource in another class (perhaps another servlet), you would lock on this same static object (giving synchObject package or public visibility rather than private), like this:
            synchronized( SynchServlet.synchObject ) {
                        SynchServlet.doGetSuccessCounter++;
            }
 
Synchronized methods and synchronized blocks are foundational concurrency tools.  There are a number of other synchronization and concurrency techniques, many thread-safe classes and the java.util.concurrent packages that may also be used.  All of them deal with synchronization within a single JVM.  Probably the clearest example of when all these techniques fall short is when you are running Java applications on separate computers and need to update a shared resource, like a file.
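
For example, the doGet() success counter above could be kept in a java.util.concurrent.atomic.AtomicInteger instead of synchronized blocks.  This is a minimal sketch, not part of the servlet code in this post:

import java.util.concurrent.atomic.AtomicInteger;

public class SuccessCounter {
    // Atomic counter - safe for concurrent threads, no explicit locking needed
    private static final AtomicInteger doGetSuccessCounter = new AtomicInteger();

    public static void recordSuccess() {
        int current = doGetSuccessCounter.incrementAndGet();
        System.out.println( "There are " + current + " doGet() successes!" );
    }

    public static void recordFailure() {
        doGetSuccessCounter.decrementAndGet();
    }
}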

Typically, you might run an independent service to update the shared file.  You would provide a network port (or messaging) interface to the service so that all the applications that update the file can send messages to the service, which serves as a proxy to update the (shared) file on their behalf.  Alternatively, you could just update a database using JDBC.
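
As a rough sketch (hypothetical port and file name, not code from this post), such a service might accept log lines on a TCP socket and be the only process that ever touches the shared file, so no file locking is needed:

import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical single-process log service: clients in any JVM connect and send lines
public class CentralLogService {
    public static void main( String[] args ) throws Exception {
        ServerSocket server = new ServerSocket( 9999 );   // assumed port
        PrintWriter log = new PrintWriter(
                new FileWriter( "central.log", true ), true );  // append, auto-flush
        while( true ) {
            Socket client = server.accept();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader( client.getInputStream() ) );
            String line;
            while( ( line = in.readLine() ) != null ) {
                log.println( line );   // only this process writes the file
            }
            client.close();
        }
    }
}
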
However, it is possible to program synchronization into applications so that they can share and update a resource, even when running in separate JVMs.  In this case, we cannot share a lock on a Java object by using the synchronized keyword – no objects are shared between separate JVMs.  Instead we lock on a file.  We depend on the locking mechanisms inherent in the Operating System (OS) filesystem in order to synchronize our efforts.

Here is an example that shows how code running in separate JVMs can share updates to a file.  The shared file that we are updating is named “central.log”.  I’m using the escaped double backslash to indicate the file is found on a fileshare named \\server\dir.  I refer to another file named “lock.lock” on that same fileshare.  We will get an exclusive file lock on the “lock.lock” file before we do any updates to “central.log”.  Note that the method centralLog() can be included in several applications, running in different JVMs, perhaps on different servers; and they will all be able to coordinate updates to the “central.log” file.
public class SynchServlet extends HttpServlet {
            public void init( ServletConfig config ) throws ServletException {
                        super.init( config );
                        centralLog( "Starting SynchServlet");
            }
 
            private static void centralLog( String message ) throws ServletException {
                        // Need something external to this JVM to test singularity
                        FileOutputStream fos = null;
                        FileLock fl = null;
                        PrintWriter centralOut = null;
                        try {
                            fos= new FileOutputStream( "\\\\server\\dir\\lock.lock" );
                            fl = fos.getChannel().tryLock();
                            // Null when can't get exclusive lock on file
                            while( fl == null ) {
                                      try {
                                                Thread.sleep(1000);
                                                fl = fos.getChannel().tryLock();
                                      } catch( Exception v ) {}
                            }
                            // At this point, I have exclusive lock on file!
                            // Append, so entries from all JVMs accumulate in the shared log
                            centralOut = new PrintWriter( new FileWriter( "\\\\server\\dir\\central.log", true ) );
                            centralOut.println( message );
                        } catch( Exception x ) {
                                    throw new ServletException( x.toString() );
                        } finally {
                                    try {
                                                if( centralOut != null ) centralOut.close();
                                    } catch(Exception y) {}
                                    try {
                                                if( fl != null ) fl.release();
                                                if( fos != null ) fos.close();
                                    } catch(Exception y) {}
                        }
            }
}
 
There are three classes in the java.io package that can provide FileChannel objects: FileInputStream, FileOutputStream and RandomAccessFile.  In our example, we instantiate a FileOutputStream on the “lock.lock” file and call getChannel() to get the FileChannel object.  There are several methods to get an exclusive lock on a FileChannel.  We use the FileChannel.tryLock() method in our example.  If tryLock() is unsuccessful, it returns null; otherwise it returns the associated FileLock object.  In our example, we call tryLock() then test the return value for null.  If it is null, we sleep for one second then try again, in the while( fl == null ) loop.  When the FileLock is not null, we have an exclusive lock and can update shared resources.  It is very important that we release the FileLock.  For this reason, I have an independent try/catch block within the finally block that calls the release() method on the FileLock object.

I use a separate file, “lock.lock”, for the FileLock, separate from the shared file that applications update, “central.log”.  We could alternatively establish the FileLock on the shared file.  I use a separate file for locking for a couple of reasons: it makes the lock file’s job more obvious and explicit, and occasionally the object I’m sharing is not a file.  I have had to implement this style of locking to accommodate a dedicated, single-user port: for example a serial port server, or in one bizarre situation, an FTP client that required a fixed port (per firewall rules), like this:
//package org.apache.commons.net.ftp;
package dac;
import org.apache.commons.net.ftp.*;
import org.apache.commons.net.ftp.parser.*;

public class FTPClient extends FTP {
           
            private int getActivePort() {
                    if( true ) return 11111; // requires ip / port filter permission
 
Let me mention the limitation of this file locking strategy.  It does not work between JVMs running on different OS architectures.  If all your JVMs are running on Windows or all your JVMs are running on UNIX, there is no problem; however, you cannot obtain an exclusive lock on a file from a Windows computer and have that lock observed on a UNIX computer, nor vice versa.  On the bright side, you can have a JVM on a Windows computer get an exclusive lock on a file that resides on a UNIX computer, and other Windows JVMs will observe the lock.  The same is true for JVMs on UNIX obtaining exclusive locks on files that reside on Windows.

There is also a risk that while a file is locked, the JVM may unexpectedly quit without completing the finally block and releasing the lock.  And there is a risk that if the locked file is on a remote computer, there may be a network failure, or the locking computer may unexpectedly reboot, which could leave the file in a locked state.  A key to avoiding this scenario is to keep a file locked only for the minimum time required.  Lock it, do your work, unlock it ASAP.
Thus ends part one of this story, Java Synchronization Across JVMs.  In part two, I will discuss flag-passing synchronization.

The complete code follows:

package dac;

import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.channels.FileLock;
 
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
public class SynchServlet extends HttpServlet {
            private static final long serialVersionUID = 1L;
            private static PrintWriter fileOut; // Synchronize use of this!
            private static int doGetSuccessCounter = 0;
            private static Object synchObject = new Object();
 
            @Override
            public void init(ServletConfig config) throws ServletException {
                        super.init(config);
                        try {
                                    centralLog("Starting SynchServlet");
                                    fileOut = new PrintWriter("logfile.log");
                        } catch (Exception x) {
                                    throw new ServletException(x.toString());
                        }
            }
 
            public void destroy() {
                        fileOut.close();
            }
 
            @Override
            public void doGet(HttpServletRequest req, HttpServletResponse res)
                                    throws ServletException, IOException {
                        try {
                                    synchronized (synchObject) {
                                                doGetSuccessCounter++;
                                    }
                                    logWrite("Using doGet()");
                                    // …

                                    synchronized (synchObject) {
                                                logWrite("There are " + doGetSuccessCounter
                                                                        + " doGet() successes!");
                                                System.out.println("There are " + doGetSuccessCounter
                                                                        + " doGet() successes!");
                                    }
                        } catch (Exception x) {
                                    synchronized (synchObject) {
                                                doGetSuccessCounter--;
                                    }
                                    throw new ServletException(x.toString());
                        }
            }
 
            public void altGet(HttpServletRequest req, HttpServletResponse res)
                                    throws ServletException, IOException {
                        try {
                                    synchronized (SynchServlet.synchObject) {
                                                SynchServlet.doGetSuccessCounter++;
                                    }
                                    SynchServlet.logWrite("Some other message");
                                    // …

                                    synchronized (synchObject) {
                                                logWrite("There are " + doGetSuccessCounter
                                                                        + " doGet() successes!");
                                                System.out.println("There are " + doGetSuccessCounter
                                                                        + " doGet() successes!");
                                    }
                        } catch (Exception x) {
                                    synchronized (synchObject) {
                                                doGetSuccessCounter--;
                                    }
                                    throw new ServletException(x.toString());
                        }
            }
 
            @Override
            public void doPost(HttpServletRequest req, HttpServletResponse res)
                                    throws ServletException, IOException {
                        logWrite("Using doPost()");
                        // …

            }
 
            // synchronized class (static) method - the lock is held on the SynchServlet class
            private static synchronized void logWrite(String logEntry) {
                        fileOut.println(logEntry);
            }
 
            private static void centralLog(String message) throws ServletException {
                        // Need something external to this JVM to test singularity
                        FileOutputStream fos = null;
                        FileLock fl = null;
                        PrintWriter centralOut = null;
                        try {
                                    fos = new FileOutputStream("\\\\server\\dir\\lock.lock");
                                    fl = fos.getChannel().tryLock();
                                    // Null when can't get exclusive lock on file
                                    while (fl == null) {
                                                try {
                                                            Thread.sleep(1000);
                                                            fl = fos.getChannel().tryLock();
                                                } catch (Exception v) {}
                                    }
                                    // At this point, I have exclusive lock on file!
                                    // Append, so entries from all JVMs accumulate in the shared log
                                    centralOut = new PrintWriter( new FileWriter( "\\\\server\\dir\\central.log", true ) );
                                    centralOut.println(message);
                        } catch (Exception x) {
                                    throw new ServletException(x.toString());
                        } finally {
                                    try {
                                                if (centralOut != null) centralOut.close();
                                    } catch (Exception y) {}
                                    try {
                                                if (fl != null) fl.release();
                                                if (fos != null) fos.close();
                                    } catch (Exception y) {}
                        }
            }
}