Common Architectures


Scalability is the ability to economically support the required quality of service as the load increases.


Two types: Vertical and Horizontal


Vertical scaling

Achieved by adding capacity (memory, CPUs, etc.) to existing servers.

Requires few to no changes to the architecture of a system.

Increases: Capacity, Manageability

Decreases: Reliability, Availability (single failure is more likely to lead to system failure)

Vertical scalability is usually cheaper than horizontal scalability.

J2EE supports vertical scaling because of automatic lifecycle management. Adding more capacity to a server allows it to manage more components (EJBs, etc.).



Horizontal scaling

Achieved by adding servers to the system.

Increases the complexity of the system architecture.

Increases: Reliability, Availability, Capacity, Performance (depends on load balancing), Flexibility

Decreases: Manageability (more elements in the physical architecture)

J2EE supports horizontal scaling because the container and server handle clustering and load balancing.


Availability and reliability are obtained through scalability. Scalability affects capacity: the more scalable the system, the more capacity it can support. This must be traded off against the complexity and manageability costs.



How related is this to Flexibility? Flexibility is the ability to change the architecture to meet new requirements in a cost-efficient manner. A flexible system should be more maintainable in the face of changes to the environment and/or to the application itself.


While maintainability is somewhat related to availability, there are specific issues to consider when deploying a maintainable topology. In fact, some maintainability factors are at cross purposes with availability. For instance, ease of maintenance would dictate minimizing the number of application server instances in order to facilitate online software upgrades. Taken to the extreme, this would result in a single application server instance, which of course would not provide a high-availability solution. In many cases a single application server instance would also not provide the required throughput.


Flexibility improves: Availability, Reliability, Scalability
Flexibility slightly decreases: Performance, Manageability

Flexibility is achieved via code that can be distributed across servers with load balancing that prevents one system from being overburdened. The use of a multi-tier architecture also helps achieve flexibility.


Reliability

The ability to ensure the integrity and consistency of the application and all of its transactions.

You increase reliability through the use of horizontal scalability, i.e., by adding more servers. This only works up to a certain point, though. When you increase reliability you increase availability.



Throughput, while related to performance, more precisely involves the creation of some number of application server instances (clones) in order to increase the number of concurrent transactions that can be accommodated. As with performance, the application server instances can be added through vertical and/or horizontal scaling.


Availability is about assuring that services are available to the required number of users for the required proportion of time. Availability requires that the topology provide some degree of process redundancy in order to eliminate single points of failure. While vertical scalability can provide this by creating multiple processes, the physical machine then becomes a single point of failure. For this reason a high-availability topology typically involves horizontal scaling across multiple machines.


Hardware-based high availability: By providing both vertical and horizontal scalability the WebSphere Application Server runtime architecture eliminates a given application server process as a single point of failure. In fact the only single point of failure in the WebSphere runtime is the database server where the WebSphere administrative repository resides. It is on the database server that any hardware-based high availability (HA) solutions such as HACMP, Sun Cluster, or MC/ServiceGuard should be configured.


Extensibility

Ability to modify or add functionality without impacting the existing functionality. The key to an extensible design is an effective OO design. Extensibility pays off most toward the front end of a system.


Some rough guidelines:
More than 25 top-level classes will lead to problems
Every use case should be able to be implemented using domain model methods

J2EE supports extensibility because it is component-based and allows you to separate the roles of an app. JSPs can handle presentation. Servlets can handle routing, and EJBs can handle business logic.


Architectural performance is concerned with creating an architecture that enforces end-to-end performance. The purpose of such an architecture is to control expensive calls and to identify bottlenecks. If you know the boundaries of the various parts of the system, the technologies involved, and the capabilities of those technologies, you can do a good job of controlling performance.


You want to minimize the number of network calls your distributed app makes – make a few “large” calls that get a lot of data vs. lots of calls that get small amounts of data.

Try to minimize process-to-process calls because they are expensive. Use resource pooling to reduce the number of expensive resources that need to be created like network connections, database connections, etc.
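The pooling idea above can be sketched in plain Java. This is a minimal, illustrative pool (class and method names are invented for the example, not any J2EE API); a container-managed connection pool does the same thing with validation, eviction, and timeouts layered on top:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Hypothetical generic pool: expensive resources (connections, etc.) are
// created once up front and recycled instead of being re-created per call.
class ResourcePool<T> {
    private final BlockingQueue<T> idle;

    ResourcePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());      // pay the creation cost once
        }
    }

    // Blocks until a pooled resource is free, bounding total resource use.
    T acquire() throws InterruptedException {
        return idle.take();
    }

    // Returns the resource for reuse by other callers.
    void release(T resource) {
        idle.add(resource);
    }
}

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ResourcePool<StringBuilder> pool =
                new ResourcePool<>(2, StringBuilder::new);
        StringBuilder b = pool.acquire();  // reused, not newly created
        b.append("work");
        pool.release(b);
    }
}
```

The blocking `acquire` is the key design choice: it caps how many expensive resources can ever exist at once, which is exactly what a container's connection pool does for you.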


Performance involves minimizing the response time for a given transaction load. While a number of factors relating to the application design can affect this, adding additional resources in the following two ways, or a combination of both, can be used to good effect:

Vertical scaling, which involves creating additional application server processes on a single physical machine in order to provide multiple thread pools, each corresponding to the JVM associated with an application server process.

Horizontal scaling, which involves creating additional application server processes across multiple physical machines.

Manageability refers to the ability to manage a system to ensure the health of the system.

A single-tier or monolithic app would be simpler to manage than a multi-tier system, but this must be weighed against the possibility of a change rippling through a monolithic app.


A simple architecture may not be as flexible or available as a more complex system but the amount of effort required to keep the system up & functioning will be less.

A component-based architecture like J2EE offsets some of the manageability problems caused by a multi-tier system.


Security ensures that info is neither modified nor disclosed except in accordance with the security policy. Tradeoffs: personal privacy, ease of use, and expense.


A highly secure system is:
More costly
Harder to define and develop
Requires more watchdog activities


Principles of Security:

Identity – The user is correctly identified through an authentication mechanism

Authority – The user can perform only allowed activities

Integrity – Data can only be modified in allowed ways

Privacy – Data is disclosed to authorized entities in authorized ways

Auditability – The system maintains logs of actions taken for later analysis


Scalability - the ability to support the required quality of service as the load increases

Maintainability - the ability to correct flaws in the existing functionality without impacting other components/systems

Reliability - the assurance of the integrity and consistency of the application and all of its transactions. Reliability spans from the OS to the Application to the service provided.

Availability - the assurance that a service/resource is always accessible

Extensibility - the ability to add/modify functionality without impacting existing functionality

Manageability - the ability to manage the system in order to ensure the continued health of a system with respect to scalability, reliability, availability, performance, and security.


The following table outlines some inherent properties of common architectures (single tier, two tier, and three- or multi-tier). Some cells were lost in the original layout.


Scalability

Single tier: Very limited

Two tier: Still limited, DB connections

Three-/Multi-tier: Good. Resource pooling possible, hardware can be added if and when the need arises.


Maintainability

Single tier: Poor, tight coupling

Two tier: Difficult, data access and business logic are mixed

Three-/Multi-tier: Good, layers can be changed independently.


Reliability

Single tier: OK, data consistency is simple because there is only one storage


Availability

Single tier: Poor, single point of failure

Two tier: Poor, DB server is still single point of failure

Three-/Multi-tier: Excellent with appropriate redundancy architecture.


Extensibility

Single tier: Poor, tight coupling

Two tier: Difficult, data access and business logic are mixed. Code is dependent on DB schema.

Three-/Multi-tier: Good, layers can be changed independently.


Performance

Two tier: Poor, DB server can become a bottleneck. Each client requires a connection, no pooling. High network bandwidth required, because all data has to travel to the client.

Three-/Multi-tier: Good. Because the system is scalable, performance can be influenced by choosing the right system components. Bottlenecks can be removed at the relevant layer.


Manageability

Single tier: Relatively easy

Two tier: Poor, complex client installation and maintenance


Security

Two tier: Problematic, because the client has too much control.

Three-/Multi-tier: Good. Only certain services can be accessed, no direct database access. Firewalls and authentication and authorization systems help to further control access.


Distribution and development

Single tier: No distribution. Green screen, dumb terminals.

Two tier: Two options: combine presentation with the business layer (thick client), or some business logic with the DB layer (thin client, using stored procedures). Improved, graphical user interface. Re-use difficult.

Three-/Multi-tier: More difficult to program because distribution, concurrency, transactions, security, and resource management have to be understood and taken care of. The EJB architecture helps to reduce this overhead.

Fault tolerance

A failure is a discrepancy between the desired or specified and the actual behaviour of a system. Failures are external symptoms of faults (defects).


Fault tolerance is the ability to prevent failures even when some of the system components have faults. The key to achieving fault tolerance is redundancy. Redundancy requires replication:


Hot backup: Extra, live copies (replicas) of an object are serving client requests and synchronise continuously with the primary object. If the primary object fails, one of the copies takes over.


Warm backup: Backup copies of an object run, but don’t serve clients. Synchronisation happens at certain, regular intervals. In case of a failure, a copy takes over from the last synchronisation point.


Cold backup: The primary object synchronises with stable storage in certain intervals. In the case of a failure, a new object is instantiated and reads the storage for set-up.

Fault handling includes service replication, fault detection, fault recovery, and fault notification.

Fault detection is often done by using heartbeats (server sends periodical signal to monitor) or polling (monitor checks server once in a while).
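A minimal sketch of heartbeat-style detection, assuming a monitor that tracks the last signal received per server (all names are hypothetical; time is passed in explicitly to keep the example deterministic — a real monitor would use the system clock):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical heartbeat monitor: servers call heartbeat() periodically;
// the monitor declares a server failed once no signal has arrived within
// the timeout window. Polling is the inverse arrangement: the monitor
// would call out to each server instead of waiting for signals.
class HeartbeatMonitor {
    private final long timeoutMillis;
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called on behalf of a server to signal liveness.
    void heartbeat(String serverId, long nowMillis) {
        lastSeen.put(serverId, nowMillis);
    }

    // A server is considered alive while its last heartbeat is recent enough.
    boolean isAlive(String serverId, long nowMillis) {
        Long seen = lastSeen.get(serverId);
        return seen != null && nowMillis - seen <= timeoutMillis;
    }
}

public class HeartbeatDemo {
    public static void main(String[] args) {
        HeartbeatMonitor monitor = new HeartbeatMonitor(5000);
        monitor.heartbeat("app-server-1", 0);
        System.out.println(monitor.isAlive("app-server-1", 3000)); // within timeout
        System.out.println(monitor.isAlive("app-server-1", 9000)); // missed beats
    }
}
```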

Fault handling in CORBA involves Replica Managers and Replica Service Agents.


Active Replication: Every replica handles requests and replies. Interceptor has to block extra calls to third objects and extra responses (same as hot backup).


Passive Replication: One primary replica handles requests and synchronises state with secondary replicas. Requests are also logged. For fail-over, a secondary replica becomes primary and processes all requests after the last synchronisation point (same as warm backup).

Miscellaneous concepts


DNS Round Robin: Spreading incoming requests equally among a number of IP addresses registered for one DNS name. Each subsequent request is resolved to the next address in the list, and once the end of the list is reached, the next request is sent to the first address again. This is a simple load-balancing strategy; it does not take into account the actual load on the machines.
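The next-address-in-a-list policy can be sketched in a few lines of Java (an illustrative selector, not DNS itself, which applies this rotation at name-resolution time):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin selector: each request gets the next address in
// the list, wrapping around at the end, with no regard for actual load.
class RoundRobin {
    private final List<String> addresses;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobin(List<String> addresses) {
        this.addresses = addresses;
    }

    String nextAddress() {
        // floorMod keeps the index valid even after the counter overflows
        int i = Math.floorMod(next.getAndIncrement(), addresses.size());
        return addresses.get(i);
    }
}

public class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(
                List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.nextAddress());  // cycles .1, .2, .3, .1
        }
    }
}
```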


Data Access Objects: Encapsulate the data access logic for a session bean or other component in a separate class. If you provide a common interface, multiple data access implementations can be provided, e.g. for different database systems. DAOs simplify maintenance of code, make the path to CMP easier, and can be automatically generated by sophisticated tools. DAOs are a bit like EJBs, but don't offer distributed access or transaction control; if those features are not needed, DAOs are easier to use. There is a standard proposal related to DAOs called JDO (Java Data Objects).
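A minimal sketch of the DAO idea, with an in-memory implementation standing in for a database-specific one (the interface and class names are invented for the example):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical common DAO interface: callers depend only on this, so a
// JDBC, file-based, or other implementation can be swapped in behind it.
interface CustomerDao {
    Optional<String> findName(int id);
    void save(int id, String name);
}

// In-memory implementation standing in for, e.g., a JdbcCustomerDao that
// would hold the SQL and connection handling for a specific database.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, String> table = new HashMap<>();

    public Optional<String> findName(int id) {
        return Optional.ofNullable(table.get(id));
    }

    public void save(int id, String name) {
        table.put(id, name);
    }
}

public class DaoDemo {
    public static void main(String[] args) {
        CustomerDao dao = new InMemoryCustomerDao(); // swap implementations here
        dao.save(42, "Alice");
        System.out.println(dao.findName(42).orElse("unknown"));
    }
}
```

Because the session bean only ever sees `CustomerDao`, changing database vendors means writing one new class, not touching the business logic.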


Legacy connectivity

Options for integrating legacy systems include CORBA, RMI-IIOP, JNI, and Messaging.



CORBA

CORBA is a unifying standard for distributed object systems. CORBA is managed by the OMG and can be used with many platforms and languages. Disadvantages: complex standard, slow-moving.

In a CORBA architecture, objects communicate through ORBs, using IIOP as the protocol.


Object Adaptor: Maps object references to implementations, activates object if necessary. Portable Object Adaptor (POA) now widely used.


Repositories: Interface and implementation repositories are used by ORBs.

IDL: Interfaces are defined in the OMG IDL and can be compiled to a concrete language interface, e.g. Java, C, C++,COBOL, Ada, Smalltalk. IDL-to-Java Mapping defines details for Java.


Invocation: Static invocation uses pre-compiled stubs and skeletons for a specific language and object. Dynamic invocation does not use stubs and skeletons, but discovers object and methods dynamically using the Dynamic invocation interface (DII) at the client and dynamic skeleton interface (DSI) at the server. There are similarities between DII and using COM’s IDispatch interface.


Corba Object Services (COS): Naming, Event (asynchronous messages), Object Transaction Service (OTS), Concurrency control, Security




RMI-IIOP

Goal: Marry RMI and CORBA. Connect RMI clients to CORBA servers and vice versa.

Benefits: Greater re-usability, legacy integration, robust firewall navigation. In the future support for transaction and security contexts can be added (new EJB/IIOP standard).



RMI-IIOP works with CORBA 2.3 ORBs. Required specs: Objects-by-value (IIOP does not traditionally support pass-by-value) and Java-to-IDL.


Java-to-IDL mapping defines how RMI and IIOP work together and the necessary RMI restrictions that are known as RMI/IDL. The mapping enables Java-to-IDL compilers to be written that take a Java remote interface and produce a corresponding CORBA interface specification in IDL.



Interoperability: A plain RMI (JRMP) client cannot call a CORBA server, and a CORBA client cannot call a plain RMI server. With RMI-IIOP on either side, interoperation is possible, but restrictions (the RMI/IDL subset) apply.

EJB 1.1 mandates RMI-IIOP API to be present. However, there may still be two kinds of EJB servers: CORBA based and proprietary. The latter do not use CORBA, but implement communication differently (not using IIOP).


Development changes

Narrowing: A direct cast does not work over IIOP; PortableRemoteObject.narrow has to be used. In plain RMI a direct cast works because the stub class can be loaded dynamically over the net.


Two new packages for RMI-IIOP: javax.rmi (PortableRemoteObject) and javax.rmi.CORBA (internal) (Normal RMI package is java.rmi).


No distributed garbage collection: Manually unregistering is necessary via unexportObject method.

RMI-IIOP clients must use JNDI. RMI registries and COS Naming can be plugged into JNDI.


Tools: Generation of RMI-IIOP stubs and skeletons: "rmic -iiop". IDL-to-Java compiler: "idlj". Java-to-IDL compiler: "rmic -idl".


Note: Making an object available through both JRMP and IIOP at the same time is possible.


Java IDL

Java IDL embeds Java into the CORBA world. It includes a new Java IDL API (org.omg.CORBA, org.omg.CosNaming) and tools including an IDL-to-Java compiler (idltojava) and an ORB.

The ORB includes the Java-to-IDL and Objects-by-Value specs (both mandated by CORBA 2.3 and necessary for RMI-IIOP connectivity). It does not define an interface repository, however, so no dynamic lookup of parameters is possible.


Sun recommendation: Java IDL should be used when accessing existing CORBA servers is the main purpose, whereas RMI-IIOP should be used when serving requests from CORBA clients is the main purpose.



Process: Write interface in IDL, compile to Java (results in interface and several classes). Use interface in client programming. Descend server from generated ImplBase class (aka ‘implementation skeleton’).

IDL-to-Java also generates ‘Stub’ class (client proxy), ‘Holder’ class (for out or inout parameters) and ‘Helper’ class (for narrowing and reading/writing).

ORB interface (org.omg.CORBA.ORB) and implementations (e.g. com.sun.CORBA.iiop.ORB). Getting an ORB: static ORB.init.

NamingContext is in package org.omg.CosNaming.

Object references: Temporary references (through proxy) and long-lived ‘stringified’ interoperable object references (IOR).



Java Native Interface (JNI) is a standard for linking Java to native programs written in other languages like C and C++. A good way to integrate a legacy application that is not written in Java into a distributed system is to wrap the application using JNI and make it accessible through RMI.


Design Patterns


1) State the benefits of using design patterns.

Improves communication between designers by use of pattern names vs. the details of the patterns.

Captures experience of solving a type of problem.

Provide a way of reusing design.

Provide a mechanism for making designs more reusable.

Provides a mechanism for systematizing the reuse of things that have been seen before.

Can be used to teach good design.


2) From a list, select the most appropriate design pattern for a given scenario.

3) State the name of a GOF design pattern given the UML diagram and/or a brief description of the pattern’s functionality.

4) Select from a list benefits of a specified GOF pattern. Identify the GOF pattern associated with a specified J2EE feature.


Types of patterns
Creational: Involved with the process of object creation.
Structural: Deals with the composition of classes or objects.
Behavioral: Characterize the ways in which classes or objects interact and distribute responsibility.


Abstract Factory: (Creational)
Provide an interface for creating families of related or dependent objects (products) without specifying their concrete classes. J2EE technology uses this pattern for the EJB Home interface, which creates new EJB objects.


It isolates concrete classes.
It makes exchanging product families easy.
It promotes consistency among products.
Supporting new kinds of products is difficult.


Factory Method: (Creational)
Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

J2EE technology uses this pattern for the EJB Home interface, which creates new EJB objects.

Eliminates the need to bind application-specific classes into your code.
Gives subclasses a hook for providing an extended version of an object being constructed.


Prototype: (Creational)
Specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype.

Hides the concrete product classes from the client.

Allows adding and removing products at run-time.

Can specify new objects by varying the values of an object’s variables.


Singleton: (Creational)

Ensure a class only has one instance, and provide a global point of access to it.

Provides controlled access to a sole instance of a class.
It avoids polluting the name space with global variables that store sole instances.
Permits a variable number of instances.
More flexible than static methods.


Adapter: (Structural)
Convert the interface of a class into another interface clients expect.

Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces.

Permits you to use an existing class that has an interface that does not match the one you need.
You want to create a reusable class that cooperates with unrelated or unforeseen classes.
The Object Adapter pattern can be used when you need to use several existing subclasses but it is impractical to adapt their interface by subclassing every one.


Bridge: (Structural)
Decouple an abstraction from its implementation so that the two can vary independently.

Use when you want to avoid a permanent binding between an abstraction and its implementation.
Use when both the abstractions and the implementations should be extensible by subclassing.
Changes in the implementation should not impact clients.


Composite: (Structural)
Compose objects into tree structures to represent whole-part hierarchies.

Composite lets clients treat individual objects and compositions of objects uniformly.

Pattern defines hierarchies consisting of primitive objects and composite objects.
Allows clients to treat composite structures and individual objects uniformly.
Makes it easier to add new kinds of components.
A disadvantage is that it can make a design overly general.


Flyweight: (Structural)
Use sharing to support large numbers of fine-grained objects efficiently.


Decorator: (Structural)
Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality. In J2EE technology, The EJB object is a decorator for the bean because the bean’s functionality is expanded to include remote behavior.


Proxy: (Structural)
Provide a surrogate or placeholder for another object to control access to it.

The EJB’s remote interface acts as a proxy for the bean. Proxy is also used in RMI.


Façade: (Structural)
Provide a unified interface to a set of interfaces in a subsystem. Façade defines a higher-level interface that makes the subsystem easier to use. Can use to achieve runtime binding without using inheritance. The Session Entity Façade pattern is a derivation of Façade that uses a Session bean as a façade for multiple Entity beans.


Command: (Behavioral)
Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. The Command pattern can be used to provide pluggable behavior, which enforces client access to services.


Strategy: (Behavioral)
Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it. The Strategy pattern can be used to provide pluggable behavior, which enforces client access to services.


Chain of Responsibility: (Behavioral)
Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.

Interpreter: (Behavioral)
Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.


Iterator: (Behavioral)
Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.


Mediator: (Behavioral)
Define an object that encapsulates how a set of objects interacts. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.


Memento: (Behavioral)
Without violating encapsulation, capture and externalize an object’s internal state so that the object can be restored to this state later.


Observer: (Behavioral)
Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.


State: (Behavioral)
Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.


Template Method: (Behavioral)
Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.


Visitor: (Behavioral)
Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.





Factory Method


Often in a particular design we have an abstract base class, and then use it to inherit an interface for various derived implementations of the class. Consider the design problem of a base class Application that uses another class Document. To use Application, you derive a subclass MyApplication from it, which contains the implementation details. You also derive MyDocument from Document to provide the implementation of Document. Now, how can you have MyApplication use MyDocument without having some outside code that knows to instantiate a MyDocument class for use with MyApplication?


The Factory method provides a simple and powerful solution to this problem. By adding a virtual method to Application, called CreateDocument(), you can have the implementation of MyApplication know that it uses MyDocument, and create it. This results in the code using Application, and Document not having to know which exact derived class is being used. This method is called the "factory method". A factory method defines "an interface for creating an object, but let[s] subclasses decide which class to instantiate." It can "defer instantiation to subclasses."


Factory Method provides a "hook" that clients can use to subclass existing classes and have them used with a minimum of code change. It can also be used with a "parallel" hierarchy: with multiple classes on the same level, you need not know which subclass of the base class you are using in order to create a companion object for it; you simply call its creation function, and because all the classes share a base class, they all have a known interface.

One potential disadvantage of the Factory Method is the requirement that the engineer subclass the creator class to instantiate a concrete class.
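The Application/Document discussion above can be sketched in Java. The class names follow the text; the method bodies are invented purely for illustration:

```java
// Application declares the factory method createDocument(); MyApplication
// decides that it produces MyDocument instances. Code written against
// Application and Document never names the concrete classes.
abstract class Document {
    abstract String open();
}

class MyDocument extends Document {
    String open() {
        return "my document opened";
    }
}

abstract class Application {
    // The factory method: subclasses decide which Document to instantiate.
    abstract Document createDocument();

    // Works against Document without knowing the concrete class.
    String newDocument() {
        return createDocument().open();
    }
}

class MyApplication extends Application {
    Document createDocument() {
        return new MyDocument();
    }
}

public class FactoryMethodDemo {
    public static void main(String[] args) {
        Application app = new MyApplication();
        System.out.println(app.newDocument());
    }
}
```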



Abstract Factory


Consider the problem of implementing a system of classes. For example, a graphics system implemented using either OpenGL or Direct3D. You want your system not to have to know which implementation it is using, only what it can do and how to ask for it. You would like to be able to create framebuffers and 3D objects and manipulate them without hard-coding the implementation. So, how can this be done?


An abstract factory is the solution. You create an abstract class that contains creation functions for all classes in a related group, then you derive concrete classes that implement creation of the particular classes in the specific implementation you desire. In the example, you would create the abstract factory GraphicsFactory, then derive two concrete classes from it, Direct3DFactory and OpenGLFactory. The interface for these classes includes functions that create derived classes such as FrameBuffer.


One consequence, which I think is a good one, is the isolation of concrete classes from the client code. By using an abstract factory, you restrict the client code to using only the calls of the abstract base classes.

Another consequence is that it makes changing whole families of implementation very easy, just make the new implementation hierarchy and slip it in with a derived factory.
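A sketch of the graphics example in Java, with the "rendering" reduced to strings for illustration (the OpenGL/Direct3D names come from the text; everything else is invented):

```java
// Client code sees only the abstract factory and abstract products;
// swapping OpenGL for Direct3D means swapping one factory object.
interface FrameBuffer {
    String describe();
}

interface GraphicsFactory {
    FrameBuffer createFrameBuffer();
}

class OpenGLFrameBuffer implements FrameBuffer {
    public String describe() { return "OpenGL framebuffer"; }
}

class OpenGLFactory implements GraphicsFactory {
    public FrameBuffer createFrameBuffer() { return new OpenGLFrameBuffer(); }
}

class Direct3DFrameBuffer implements FrameBuffer {
    public String describe() { return "Direct3D framebuffer"; }
}

class Direct3DFactory implements GraphicsFactory {
    public FrameBuffer createFrameBuffer() { return new Direct3DFrameBuffer(); }
}

public class AbstractFactoryDemo {
    // Client code depends only on the abstract types.
    static String render(GraphicsFactory factory) {
        return factory.createFrameBuffer().describe();
    }

    public static void main(String[] args) {
        System.out.println(render(new OpenGLFactory()));
        System.out.println(render(new Direct3DFactory()));
    }
}
```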





Singleton


Sometimes in creating a system of classes, you want particular classes to have only one instance, and you want to ensure that no one can instantiate more than one. For example, you only want to have one file system; having more than one would cause conflicts. So, how can we do this and still provide a global point of access for that one instance?


The solution is a singleton class. A singleton class has a static member variable which is a pointer to the one instance, and a static method which will return this instance, or create it first if none exists. All clients obtain the instance by calling the static method. For example, a class FileSystem contains a static member FileSystem* _instance and a static method FileSystem::Instance(). Instance() returns _instance if it is not NULL; otherwise it creates a new FileSystem object, sets _instance to it, and returns _instance.


A singleton object is a good pattern because it provides a global point of access to a class without adding a variable to the global namespace.

It also can be used for run-time class refinement. Picture a singleton object that changes which subclass its pointer instantiates based on run-time conditions.

You can also use the singleton mechanism to restrict the number of instances to some number other than one, like four.
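The FileSystem example reads naturally in Java; the C++-style pointer from the description becomes a static field, and the static method lazily creates the single instance (a sketch — `synchronized` is added so lazy creation stays safe if threads race on first access):

```java
class FileSystem {
    private static FileSystem instance;

    private FileSystem() {
        // private: clients cannot construct a second copy
    }

    // Global point of access; creates the instance on first use.
    static synchronized FileSystem instance() {
        if (instance == null) {
            instance = new FileSystem();
        }
        return instance;
    }
}

public class SingletonDemo {
    public static void main(String[] args) {
        FileSystem a = FileSystem.instance();
        FileSystem b = FileSystem.instance();
        System.out.println(a == b);  // both references are the one instance
    }
}
```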





Builder


Imagine a tokenizer that can read in different formats of text. Its functionality isn't based on the format it is reading; however, each format does imply a somewhat different implementation. Now, how can you provide a common interface to the rest of the code, and convert between the different formats, without having to know the format in every place you use the data?


A builder can provide the solution. A builder is an abstract class that can be used for object construction, where the details of the construction are not known to the client code. For our example, a builder class could contain the methods necessary to tokenize a string given a format type, and the builder is instantiated based on that type. The client code then always calls the builder to convert its input, without having to know which format type it is using.


A builder abstracts the construction implementation details of a class type.

It also encapsulates the way in which objects are constructed, improving the modularity of a system.

A builder lets you have finer control over the creation process, by letting a builder class have multiple methods that are called in a sequence to create an object.
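A small Java sketch of the tokenizer example, assuming two invented formats (comma-separated and whitespace-separated) so the format-specific steps stay trivial:

```java
import java.util.ArrayList;
import java.util.List;

// The client drives construction through the abstract builder and never
// sees which format-specific builder is in use.
abstract class TokenBuilder {
    protected final List<String> tokens = new ArrayList<>();

    abstract void addLine(String line);   // format-specific construction step

    List<String> result() {
        return tokens;
    }
}

class CsvTokenBuilder extends TokenBuilder {
    void addLine(String line) {
        for (String t : line.split(",")) tokens.add(t.trim());
    }
}

class SpaceTokenBuilder extends TokenBuilder {
    void addLine(String line) {
        for (String t : line.trim().split("\\s+")) tokens.add(t);
    }
}

public class BuilderDemo {
    // Client code: the same sequence of calls works for any format.
    static List<String> tokenize(TokenBuilder builder, String line) {
        builder.addLine(line);
        return builder.result();
    }

    public static void main(String[] args) {
        System.out.println(tokenize(new CsvTokenBuilder(), "a, b, c"));
        System.out.println(tokenize(new SpaceTokenBuilder(), "a b  c"));
    }
}
```

Note how `tokenize` shows the finer control the text mentions: the client can call `addLine` repeatedly, in sequence, before asking for the finished result.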





Prototype


The problem solved by a prototype doesn't seem to be a common one, yet it can be useful. Consider the case of object initialization, where multiple objects are being created and initialized to a certain state, based on run-time conditions. Normal procedure would have the clients create each object and then initialize it to the appropriate state. But is there a better way?

Another problem: given a set of subclasses, each with different implementations, how can you create a copy of an object without knowing its internal details?


A prototype simply adds a method to the class in question that creates a copy of itself. So, in the case above, the client code would simply create one object and initialize it, then call the cloning method of the class for as many copies as it wants, thereby saving a lot of manual initialization.

By combining a prototype with an abstract base class and multiple subclasses, the client can create copies of a class without having to know its internal details.


With a prototype class and a prototype factory, you can reduce the object hierarchy while obtaining similar functionality. At runtime, the prototype factory can be instantiated by passing in the class prototypes it controls. When having the factory make an object, it simply clones one of the prototypes that it has internally.
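The prototype-plus-factory combination above can be sketched as follows (Shape/Circle are invented names; copy() is the prototype operation):

```java
import java.util.HashMap;
import java.util.Map;

// An object knows how to copy itself, and a prototype factory hands out
// clones of whatever prototypes it was given at run time.
abstract class Shape {
    int size;

    abstract Shape copy();   // the prototype operation
}

class Circle extends Shape {
    Shape copy() {
        Circle c = new Circle();
        c.size = this.size;   // carry over the initialized state
        return c;
    }
}

class PrototypeFactory {
    private final Map<String, Shape> prototypes = new HashMap<>();

    // Register an already-initialized prototype under a key.
    void register(String key, Shape prototype) {
        prototypes.put(key, prototype);
    }

    // Creating an object is just cloning the stored prototype.
    Shape create(String key) {
        return prototypes.get(key).copy();
    }
}

public class PrototypeDemo {
    public static void main(String[] args) {
        Circle prototype = new Circle();
        prototype.size = 10;                 // initialize once

        PrototypeFactory factory = new PrototypeFactory();
        factory.register("circle", prototype);

        Shape clone = factory.create("circle");
        System.out.println(clone.size);      // state came from the prototype
    }
}
```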







Adapter


Imagine an existing system that utilizes a certain class for some functionality; then imagine that a new library is released providing the same functionality, only better. The problem is that the interface on the new class is different. What to do?


An adapter class provides a common interface over top of a class with a unique interface. There are two types of adapter, class adapters and object adapters. A class adapter inherits multiply from a target base class, which provides the interface, and privately from an adaptee class which provides the implementation. An object adapter provides the interface change by object composition. It knows the methods to call into the adaptee.


For the class adapter, the adaptee is reached via inheritance, so some of the adaptee's methods can be overridden. However, the adapter is committed to the adaptee class.

For the object adapter, the adaptee must be reached via pointer indirection, but this allows for one adapter to work for many adaptees. The object adapter has a harder time overriding the functionality of the adaptee, since it requires subclassing the adaptee and switching pointers.
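An object adapter of the kind just described might look like this; the interface and class names are assumptions for illustration:

```java
// Object-adapter sketch: Target is the interface existing code expects,
// NewLibraryClass is the adaptee with an incompatible interface.
interface Target {
    String request();
}

class NewLibraryClass {              // the adaptee
    String specificRequest() { return "data"; }
}

class Adapter implements Target {
    private final NewLibraryClass adaptee;   // composition, not inheritance
    Adapter(NewLibraryClass adaptee) { this.adaptee = adaptee; }
    // The adapter translates request() into the adaptee's vocabulary.
    public String request() { return adaptee.specificRequest(); }
}
```

Because the adaptee is held by reference, the same Adapter could wrap any subclass of NewLibraryClass.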





Proxy

In some contexts we want to defer the actual creation of an object until it is used; however, we need to have a reference to it in advance. Consider a document that contains some images. We want the document to load quickly when the user opens it, but the images take a long time to read in from disk. How can we resolve this apparently difficult design problem?


The answer is the proxy. A proxy is a parallel class, sharing the real subject's interface, that defers creation of the real object until a meaningful method is called on it. In the case above, if we had an image proxy, we could instantiate that, and the image would not be loaded until the proxy was told to draw; thus images that were not on screen would not slow down the loading of the document. A proxy knows which object it represents by taking a reference (such as a filename) at creation time that can be used to actually create the real object. When one of the proxy's methods is called, it creates the real object from that reference if needed, then calls the same method on it.


A proxy introduces a level of indirection in front of an object that can have all kinds of benefits. We have already seen the benefit for load on demand. It can perform other optimizations as well, such as "copy-on-write": "Copying a large and complex object can be an expensive operation. If the copy is never modified, then there is no need to incur this cost." So the proxy can skip the initial copy, keep track of modifications, and actually perform the copy only if the object is changed.
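A virtual proxy for the image example could be sketched like this; all names here are illustrative:

```java
// Virtual-proxy sketch: ImageProxy stands in for Image until draw() is called.
interface Drawable { String draw(); }

class Image implements Drawable {
    Image(String filename) { /* the expensive disk read would happen here */ }
    public String draw() { return "drawn"; }
}

class ImageProxy implements Drawable {
    private final String filename;   // enough information to create the real image later
    private Image real;              // created lazily
    ImageProxy(String filename) { this.filename = filename; }
    public String draw() {
        if (real == null) real = new Image(filename);   // load on first use
        return real.draw();
    }
}
```

The document can hold a list of Drawable references and never know which are proxies.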





Composite

Sometimes, in creating a hierarchy of different but related classes, you come across a case where one of the classes is best implemented as an aggregate of other classes from the same abstract base. One example is a graphics system. Consider an abstract base class called Graphic with subclasses Rectangle and Line. Now suppose you would like to add a new subclass called Picture, which is best implemented as a list of other primitives. How can this be accomplished?


A composite is a subclass that simply contains a list of the objects it is composed of. In our example, Picture has the same methods as the other subclasses, but the composite implements each one by iterating through the objects it contains and calling the same method on each. For example, if all graphics have a Draw() method, the composite's Draw() just calls Draw() on all its aggregate objects.


A composite enables a complex tree hierarchy, since anywhere you can accept a primitive subclass, you can also accept a composite. It also means the client doesn't have to know the concrete nature of the objects, which keeps the client simpler. However, it does require that all classes in the hierarchy have methods for adding and removing children, even if they aren't composites, and it makes the design a little too general.
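The Graphic/Picture example can be sketched directly (Rectangle is omitted for brevity):

```java
import java.util.ArrayList;
import java.util.List;

// Composite sketch using the Graphic/Picture names from the text.
abstract class Graphic {
    abstract String draw();
}

class Line extends Graphic {
    String draw() { return "line"; }
}

class Picture extends Graphic {
    private final List<Graphic> children = new ArrayList<>();
    void add(Graphic g) { children.add(g); }
    // The composite just forwards the call to everything it contains.
    String draw() {
        StringBuilder sb = new StringBuilder();
        for (Graphic g : children) sb.append(g.draw()).append(" ");
        return sb.toString().trim();
    }
}
```

Since Picture is itself a Graphic, pictures can nest inside pictures, giving the tree structure described above.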





Façade

Consider a complicated subsystem with many different classes and many different interfaces. How can you give the user access to the system without them having to know all the different classes?


The answer is a façade: it provides a single interface to a system of classes. It exposes methods that use the classes in the system while hiding how it uses them. The façade instantiates and uses the classes, while the client need only instantiate the façade and know its interface.


The façade simplifies the system for the client and loosens the coupling between the client and the system. This allows the system to change without as many changes to client code. It still permits complex use, since it doesn't actually prevent the client from instantiating the classes in the system directly if it wants to.
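A minimal sketch, assuming a hypothetical order-processing subsystem (all class names are invented for illustration):

```java
// Façade sketch: three subsystem classes hidden behind one interface.
class Inventory { boolean reserve(String item) { return true; } }
class Billing   { boolean charge(double amount) { return true; } }
class Shipping  { String ship(String item) { return "shipped:" + item; } }

class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final Billing billing = new Billing();
    private final Shipping shipping = new Shipping();
    // One method hides the coordination of three subsystem classes.
    String placeOrder(String item, double amount) {
        if (inventory.reserve(item) && billing.charge(amount)) {
            return shipping.ship(item);
        }
        return "failed";
    }
}
```

A client calls placeOrder() and never touches Inventory, Billing, or Shipping, though nothing stops it from doing so if it needs finer control.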





Flyweight

A thoroughly object-oriented design would like to use classes for everything; however, using very many small objects that combine with each other to form some complex functionality can be expensive, incurring much overhead. How can we use objects everywhere, yet share them without penalty?


The flyweight solves this problem by sharing objects with like functionality while accepting extrinsic data that affects how they operate. Consider a paragraph of text where each character is an object — that certainly sounds like a lot of overhead. Now consider each character as a shared flyweight that accepts an extrinsic formatting object. Sharing reduces the overhead, yet the full range of formatting is maintained. You can accomplish the sharing with a flyweight factory, which accepts requests for flyweights by key and returns the flyweight object, creating it only if it doesn't already exist.


The more the flyweights are shared, the more they benefit a design. They do incur some run-time processing cost for passing extrinsic state and looking up flyweights, but this is quickly offset as sharing increases. Speed also depends on how the extrinsic data is obtained, which can slow the process. Overall, a good pattern for cases involving heavy sharing of data.
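The character/formatting example above can be sketched as a flyweight plus factory (Glyph and GlyphFactory are invented names):

```java
import java.util.HashMap;
import java.util.Map;

// Flyweight sketch: the glyph stores only intrinsic, shared state.
class Glyph {
    final char symbol;                // intrinsic state, shared by all users
    Glyph(char symbol) { this.symbol = symbol; }
    // Extrinsic state (the formatting) is passed in, never stored.
    String render(String font) { return font + ":" + symbol; }
}

class GlyphFactory {
    private final Map<Character, Glyph> pool = new HashMap<>();
    // Return the shared instance, creating it only on first request.
    Glyph get(char c) {
        return pool.computeIfAbsent(c, Glyph::new);
    }
}
```

A million occurrences of 'a' in a document would share the one Glyph instance, each rendered with its own formatting argument.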





Bridge

A class hierarchy provides a way to implement different specific implementations by using an abstract base class and deriving subclasses. This is often a perfect solution. However, it does have one problem: it binds an implementation to an abstraction permanently. What if several derived classes share part of their implementation? Normal class inheritance would require you to duplicate that implementation in each subclass. What is an engineer to do?


The solution is the bridge. A bridge creates two class hierarchies: one containing the original abstraction hierarchy, and a separate hierarchy of implementor classes that supply the shared functions. The abstraction holds a reference to an implementor, so each abstraction subclass defines only what is unique to it and shares the implementation it wants to.


A bridge decouples the interface and the implementation. Thus you can have the implementation configured at run-time. A bridge also allows for improved extensibility since you can extend either of the two hierarchies.
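A common illustration of the two hierarchies is windows and renderers; the names below are assumptions, not from the source:

```java
// Bridge sketch: abstraction (Window) holds a reference to a separate
// implementor hierarchy (Renderer) — that reference is the "bridge".
interface Renderer {
    String drawBox();
}
class RasterRenderer implements Renderer {
    public String drawBox() { return "raster box"; }
}
class VectorRenderer implements Renderer {
    public String drawBox() { return "vector box"; }
}

abstract class Window {
    protected final Renderer renderer;   // configured at run-time
    Window(Renderer renderer) { this.renderer = renderer; }
    abstract String draw();
}
class DialogWindow extends Window {
    DialogWindow(Renderer r) { super(r); }
    String draw() { return "dialog: " + renderer.drawBox(); }
}
```

New window types and new renderers can now be added independently, which is the extensibility benefit noted above.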





Decorator

Adding functionality to components in a hierarchy is easily accomplished by subclassing a base class and implementing the added functionality. But what if we don't always want the added functionality? Derived components are locked into their implemented behavior. Consider a text box: sometimes we want the box to have a border, and sometimes not. We could make two classes, one with and one without a border, but that is a waste of work and a duplication of labor — not to mention the new behavior is static, not dynamic. How can we dynamically add functionality to an object without permanently committing it to that functionality?


A decorator is a parallel subclass off of the same base class as the class it decorates. It takes a pointer to the base class as an initialization parameter. It adds its own functionality, then calls the same method on the object it contains. It is almost like a composite with only one component, except that it adds functionality unique to itself.


Since the decorator is derived off the same base class, the client need not know whether it is working with a decorated object or not, and the added functionality can be composed dynamically.
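The bordered-text-box example can be sketched as follows (Widget, TextBox, and BorderDecorator are illustrative names):

```java
// Decorator sketch: BorderDecorator wraps any Widget, decorated or not.
interface Widget {
    String render();
}

class TextBox implements Widget {
    public String render() { return "text"; }
}

class BorderDecorator implements Widget {
    private final Widget inner;          // the component being decorated
    BorderDecorator(Widget inner) { this.inner = inner; }
    // Add the border, then delegate to the wrapped component.
    public String render() { return "[" + inner.render() + "]"; }
}
```

Because the decorator implements the same interface it wraps, decorators stack: wrapping twice yields a double border, all composed at run-time.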








Template

It is often tedious to implement many subclasses of a particular base class. It would be nice if there were a way to reference the existing structure and only redefine some aspects of a class definition to make a new subclass.


A template is a very useful tool. It provides a way to reuse a base class in a subclass and change only what you want to, without entering a lot of redundant code. When you use a template, you pass in parameters that configure the subclass to match your needs, without requiring you to completely define the new subclass. If you have ever used the STL, then you are familiar with templates. The STL has many predefined templates that can be used to make many useful classes. For example, in the C++ STL there is a template called list; you can make a list of floating-point numbers with the line std::list<float>. This is quite a lot less trouble than creating your own implementation of a list class.


To appropriately use a template, you must understand which methods can be overridden, and which methods must be overridden.

A template can provide a very efficient means of code reuse. If you can express your common functionality in a general way such that it can be templatized, then you have already increased code reuse.
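The point about which methods can and must be overridden applies most directly to the Template Method pattern, where a base class fixes the skeleton of an algorithm and subclasses fill in steps. A minimal sketch, with illustrative names:

```java
// Template Method sketch: generate() fixes the algorithm's structure;
// subclasses may override header()/footer() and must override body().
abstract class Report {
    // The template method itself: final so the overall skeleton can't change.
    final String generate() {
        return header() + "|" + body() + "|" + footer();
    }
    String header() { return "HEAD"; }   // hook: may be overridden
    String footer() { return "FOOT"; }   // hook: may be overridden
    abstract String body();              // must be overridden
}

class SalesReport extends Report {
    String body() { return "sales"; }
}
```

The subclass supplies only the varying step, which is exactly the "redefine some aspects" reuse described above.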





Iterator

When working with lists it is often nice not to have to know the internal details of how the list is implemented. We may have different ways to traverse the list and do not want to clutter each list class with all the different traversal methods. How can we do this?


An iterator is a class separate from the list class it works on. The iterator knows how the list is implemented and how to traverse it in a specific way. The list class has methods that instantiate an object of the appropriate iterator class to traverse itself. If you are familiar with the STL (Standard Template Library), then you are familiar with the concept.


An iterator keeps track of its own traversal state within a list, so multiple iterators can be active in different places without the list class having to support this explicitly. Iterators can also implement any kind of traversal: forward, backward, etc. An iterator is a very powerful tool for use with lists; you can use one to specify a position in a list for higher-level manipulation routines such as sorting.
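A sketch of a container handing out iterator objects that know its layout (IntList and Cursor are invented names; a reverse() factory could coexist with forward()):

```java
// Iterator sketch: the container creates cursors; each cursor tracks
// its own position, so several can be active at once.
class IntList {
    private final int[] data;
    IntList(int... data) { this.data = data; }

    interface Cursor {
        boolean hasNext();
        int next();
    }

    // Factory method for a forward iterator over this list.
    Cursor forward() {
        return new Cursor() {
            private int i = 0;                       // traversal state lives here
            public boolean hasNext() { return i < data.length; }
            public int next() { return data[i++]; }
        };
    }
}
```

Clients traverse via the Cursor interface and never see the backing array.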





Strategy

How can we use different algorithms for like functionality without having the clients contain the code that implements the algorithms? How can we facilitate the extensibility of an algorithm set? How can we select among different algorithms without complex conditionals?


The answer is the strategy. A strategy is a collection of algorithms, each implemented in its own concrete subclass. All the algorithms produce the same kind of result, but may do so differently. Thus a separate context class can access the functionality without having to contain it internally.


Strategies can implement a range of algorithms, each encapsulated and easy to modify.

This provides a way to implement similar functionality without having to subclass the general class, only the algorithm.

One problem is that in order to determine which algorithm to use, the context class must know something about each algorithm.

Strategies increase the number of objects in a system, but this can be alleviated by sharing the algorithm classes.
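A minimal sketch with a hypothetical pricing example (all names invented):

```java
// Strategy sketch: interchangeable algorithm objects behind one interface.
interface PricingStrategy {
    double price(double base);
}
class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}
class SalePricing implements PricingStrategy {
    public double price(double base) { return base * 0.8; }   // 20% off
}

// The context class delegates to whichever strategy it was configured with.
class Checkout {
    private final PricingStrategy strategy;
    Checkout(PricingStrategy strategy) { this.strategy = strategy; }
    double total(double base) { return strategy.price(base); }
}
```

Adding a new pricing rule means adding one class; Checkout and its callers are untouched, and no conditional logic is needed.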



Chain of Responsibility


Consider the case where you have a structural hierarchy of objects that are not necessarily subclasses of the same base class. Now let's say there is a piece of functionality that any of them could implement, but it is not statically determined which one should handle a given case. That sounds a little complicated, but picture a context-sensitive help system. Each piece of the display may have a piece of text it wants to display whenever the cursor is over it, or it may not. When a piece doesn't want to handle it, you want the component that contains it to display a more general help string. You also want this system to be dynamic at run-time. How can this be done?


The Chain of Responsibility can solve this problem. It requires that each of the classes involved inherit from a base class, say HelpHandler. When the objects are instantiated, each initializes its parent class with a successor — a reference to another object also derived from the base class. When the time comes to call the method you want to pass up the chain, the first object picked is the most specific one. If that object doesn't want (or can't handle) the request, it calls BaseClass::Method(), which forwards the request to the successor, and so on, until one of the objects handles it.


The chain of responsibility means the client doesn't need to know who is in fact handling the request, only that the request is being handled. You can also change the chain at run-time to accommodate a changing environment. However, it should be noted that all participants in the chain could refuse to handle the request, and the client would not know.
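The HelpHandler example can be sketched as follows; Button and Dialog are illustrative widget names:

```java
// Chain-of-responsibility sketch: unhandled requests flow to the successor.
abstract class HelpHandler {
    private final HelpHandler successor;   // next object in the chain, may be null
    HelpHandler(HelpHandler successor) { this.successor = successor; }
    // Default behavior: decline and forward; a handler overrides this to answer.
    String help() {
        return successor != null ? successor.help() : "no help available";
    }
}

class Button extends HelpHandler {
    Button(HelpHandler successor) { super(successor); }
    // No override: this widget has no specific help, so requests pass through.
}

class Dialog extends HelpHandler {
    Dialog() { super(null); }
    @Override String help() { return "general dialog help"; }
}
```

Asking the button for help transparently yields the enclosing dialog's more general string, and the chain can be rewired at run-time by constructing with different successors.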





Command

How can we issue requests to other objects in a system without having to know who will receive the request, or even what the request is? This is useful for implementing functionality such as a menu system, where each menu item has a command that it executes, but it doesn't need to know what the command is — it only needs to trigger it at the right time. How can this be done?


The command pattern is simply the use of objects as commands in a system. The command class has a virtual member called Execute(). In our example above, the menu item could take a command object as an initialization parameter; when the application runs, the menu item gets a command, and when the menu item is clicked it calls Execute() on that command.


Commands are very powerful and can support a lot of different functionality. For example, if each command registers itself with some global undo class when it executes, then simply by reversing each command in reverse order (I smell a stack…) we can undo any sequence of commands, provided every command class can be undone. A command object separates the object that triggers an operation from the one that executes it. By using the Composite pattern you can extend a command to execute multiple commands. And since all commands derive from the command base class, adding new commands is easy.

I have used this pattern myself to implement a dynamic console in a game. At run-time, sections of code register commands with the console. The console then accepts textual input, pattern-matches it against the Name() method of each command, and upon finding a match calls Execute() on the command. I had implemented this before discovering Design Patterns, so I could have saved myself a lot of work in learning the intricacies of this pattern if I had read the book.
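The command-with-undo idea above can be sketched with a stack of executed commands (Counter, Increment, and History are invented names):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Command sketch with undo: each command knows how to reverse itself.
interface Command {
    void execute();
    void undo();
}

class Counter { int value; }

class Increment implements Command {
    private final Counter counter;
    Increment(Counter counter) { this.counter = counter; }
    public void execute() { counter.value++; }
    public void undo()    { counter.value--; }
}

class History {
    private final Deque<Command> done = new ArrayDeque<>();  // the stack we "smell"
    void run(Command c) { c.execute(); done.push(c); }
    void undoLast() { if (!done.isEmpty()) done.pop().undo(); }
}
```

The invoker (here, History) never knows what the commands do; popping and undoing in reverse order rewinds any sequence.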





State

When an object has different internal states, how can we implement the state-dependent behavior without complicated boolean logic inside the class?


The answer is the state pattern. In this pattern, the context object contains a state object representing the current state the context is in; the state object implements all the state-dependent methods of the context. Whenever the context receives a method call, it passes that call on to its state object to execute the functionality. The context changes the state object as needed to correctly reflect its internal state.


Encapsulates specific functionality for specific states in its own class

Provides an easy way to extend the functionality of a context class. Just add more states.

State objects can be shared in certain cases to increase reuse.
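A minimal sketch, using a hypothetical door that toggles between states:

```java
// State sketch: the context (Door) delegates to its current state object,
// and the state objects themselves trigger the transitions.
interface DoorState {
    String push();
}

class Door {
    private DoorState state = new Closed();
    void setState(DoorState s) { state = s; }
    String push() { return state.push(); }   // no if/else on the state

    class Closed implements DoorState {
        public String push() { setState(new Open()); return "opening"; }
    }
    class Open implements DoorState {
        public String push() { setState(new Closed()); return "closing"; }
    }
}
```

Adding a Locked state would mean adding one class, not threading another boolean through every method.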





Visitor

Sometimes we have multiple operations that need to be implemented over a structure of objects. Traditionally we would have each of the objects derive from a base class declaring each of the operations, and then override the operations to implement them. This is a problem if the operations are different in nature, or if the number of operations exceeds the number of different subclasses in the tree. How can we overcome this?


The solution is the visitor pattern. With the visitor pattern, each object class in the structure has only one method, called AcceptVisitor(), which takes a visitor object as a parameter. A visitor is an implementation of a particular operation and has a method for each of the class types in the structure. Each object in the structure calls the method on the visitor appropriate to its own type. For example, suppose a tree structure contains two types of classes, Foo and Fib. When we want to perform a function on them, we pass down a visitor that implements the function, and nodes of type Foo call the VisitFoo() method on the visitor, which implements the function for objects of type Foo.


It is easy to add operations: simply derive a new visitor class.

However, it is hard to add new class types to the structure, since doing so requires changing both the visitor base class and all its subclasses.
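The double-dispatch mechanism above, using the Foo/Fib node names from the text, can be sketched as:

```java
// Visitor sketch: each node calls the visitor method for its own type.
interface Visitor {
    String visitFoo(Foo f);
    String visitFib(Fib f);
}

interface Node {
    String accept(Visitor v);   // the AcceptVisitor() method from the text
}

class Foo implements Node {
    public String accept(Visitor v) { return v.visitFoo(this); }
}
class Fib implements Node {
    public String accept(Visitor v) { return v.visitFib(this); }
}

// One operation over the whole structure lives in one visitor class.
class NameVisitor implements Visitor {
    public String visitFoo(Foo f) { return "foo"; }
    public String visitFib(Fib f) { return "fib"; }
}
```

A new operation is one new visitor class, but a new node type forces a new method on Visitor and every visitor, which is the trade-off noted above.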





Mediator

Partitioning a system can increase reusability. However, as the number of classes increases, the number of interconnections also increases, and that reduces reusability, because the classes rely more and more on the existence of one another. How can we favor weak coupling and increase reusability?


A mediator provides an interface, and a level of indirection, between dependent classes. A mediator can, for instance, sit between a list box and a text entry field. When the list box changes, it tells the mediator, which then gets the selected text from the list box and passes it to the text entry box. So the text entry box does not need to know about the list box and has no interdependency with it. This is implemented by having both the text entry and list boxes derive from a colleague base class with a method Changed() that tells the mediator whenever a box changes. The mediator then determines what needs to be done to keep the system functioning properly.


Changing how a system of objects interacts requires only subclassing the mediator; the colleague classes remain unchanged. The mediator also provides one central place that controls the objects and determines how they communicate. A mediator further reduces the number of interface methods, since context-specific methods are not required.
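The list-box/text-field example can be sketched like this (the concrete classes and wiring are simplified for illustration):

```java
// Mediator sketch: colleagues report changes; the mediator does the wiring.
class Mediator {
    ListBox listBox;
    TextField textField;
    // All coordination logic lives here; colleagues never reference each other.
    void changed(Colleague source) {
        if (source == listBox) textField.text = listBox.selection;
    }
}

abstract class Colleague {
    protected final Mediator mediator;
    Colleague(Mediator mediator) { this.mediator = mediator; }
}

class ListBox extends Colleague {
    String selection = "";
    ListBox(Mediator m) { super(m); }
    void select(String item) { selection = item; mediator.changed(this); }
}

class TextField extends Colleague {
    String text = "";
    TextField(Mediator m) { super(m); }
}
```

Changing how the widgets interact means editing (or subclassing) only Mediator.changed().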





Memento

When a user interacts with a program, one important feature is often the ability to undo what was just done. This is common in many applications, from graphics to word processing — in fact I just used it now in Microsoft Word. Often the operation just executed cannot simply be reversed, because original information was lost or for some other reason. How can we build a system that CAN reverse all operations?


The answer is the memento. A memento is simply an object that contains all the information needed to recreate a specific state of another object. When an operation is performed, the originator object performs the operation and passes back a memento as a side effect. That memento is then stored, and if the system later wishes to return the originator to the old state, the memento is simply passed back to the originator, which adjusts its internal state accordingly. At no time does anyone but the originator know the contents of the memento.


Since only the originator knows the contents of the memento, no one else has to know the implementation details of the originator.

A caretaker object is required to create, store and delete mementos.

Mementos could be very expensive in terms of overhead if the internal state of the originator is complex. So, in some cases mementos might not be a valid solution.
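A minimal sketch, using a hypothetical text editor as the originator:

```java
// Memento sketch: the snapshot is opaque to everyone but the originator.
class Editor {
    private String text = "";
    void type(String s) { text += s; }
    String text() { return text; }

    // Private fields keep the caretaker from reading the saved state.
    static class Memento {
        private final String saved;
        private Memento(String saved) { this.saved = saved; }
    }
    Memento save() { return new Memento(text); }
    void restore(Memento m) { text = m.saved; }
}
```

A caretaker just stores the Memento and later hands it back; it never sees inside, so the Editor's internals stay hidden.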





Observer

In a complex system you want to "maintain consistency between related objects". This is not a trivial task, since coupling the objects together reduces the reusability of each component. How can you maintain consistency, yet also maintain loose coupling?


If you make the related classes observers, which look at a common repository of data for their state, you can maintain consistency while no observer knows about the others. For example, in a spreadsheet application the observers could be the numerical cells, the graphs, and the pie charts. They all look to the data (the subject) to determine how they should look. The subject has methods to attach, detach, and update the observers in the event that its state changes.


Since all observers derive from an abstract base class, all the subject knows is that it is being observed, not by whom.

When a subject changes, it tells all the observers, thus it can broadcast changes.

One problem is that in this system no one can know the cost of changing the subject. It could be as simple as incrementing a pointer, or as complex as redrawing the entire screen.
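The attach/notify mechanism for the spreadsheet example can be sketched as (Subject and Chart are illustrative names):

```java
import java.util.ArrayList;
import java.util.List;

// Observer sketch: the subject broadcasts changes to anonymous observers.
interface Observer {
    void update(int value);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();
    private int value;
    void attach(Observer o) { observers.add(o); }
    void setValue(int v) {
        value = v;
        for (Observer o : observers) o.update(value);   // broadcast the change
    }
}

class Chart implements Observer {
    int shown;
    public void update(int value) { shown = value; }
}
```

The subject only holds Observer references, so cells, graphs, and pie charts can all subscribe without the subject, or each other, knowing their concrete types.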





Interpreter

"If a particular kind of problem occurs often enough, then it might be worthwhile to express [the] instances of the problem as sentences in a simple language." The classic example is pattern matching using regular expressions: there is a simple language defined, and regular expressions have a well-defined syntax.


An interpreter simply implements the class hierarchy corresponding to the BNF specification of a language. It defines an abstract expression class and concrete subclasses that represent the specific types of expressions. Each expression may contain other expressions as sub-parts; this ability to nest any expression inside another lets the class structure mirror a BNF specification very well.


It is easy to create and extend simple grammars, but complex grammars are hard to maintain.
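A tiny arithmetic grammar illustrates the one-class-per-rule structure (Num and Plus are invented names for a hypothetical grammar):

```java
// Interpreter sketch: each grammar rule is a class; sentences are trees.
interface Expression {
    int interpret();
}

class Num implements Expression {            // terminal expression
    private final int value;
    Num(int value) { this.value = value; }
    public int interpret() { return value; }
}

class Plus implements Expression {           // nonterminal: contains sub-expressions
    private final Expression left, right;
    Plus(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}
```

Because Plus holds arbitrary Expressions, sentences nest freely, mirroring the recursion in the grammar; adding a Minus rule is one new class.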



Additional References:

EJB (Enterprise JavaBeans)



1. List the required classes/interfaces that must be provided for an EJB.
2. Distinguish stateful and stateless Session beans
3. Distinguish Session and Entity beans
4. Recognize appropriate uses for Entity, Stateful Session, and Stateless Session beans
5. State benefits and costs of Container Managed Persistence
6. State the transactional behavior in a given scenario for an enterprise bean method with a specified transactional attribute as defined in the deployment descriptor
7. Given a requirement specification detailing security and flexibility needs, identify architectures that would fulfill those requirements
8. Identify costs and benefits of using an intermediate data-access object between an entity bean and the data resource
9. State the benefits of bean pooling in an EJB container
10. State the benefits of passivation in an EJB container
11. State the benefit of monitoring of resources in an EJB container
12. Explain how the EJB container does lifecycle management and has the capability to increase scalability



Java Enterprise in a Nutshell, Flanagan, Farley, Crawford & Magnusson, O'Reilly [EJNS]
Mastering Enterprise JavaBeans, Ed Roman [MEJB]
Enterprise JavaBeans, 3rd Edition, Monson-Haefel [EJB]


OBJECTIVE #1: List the required classes/interfaces that must be provided for an EJB.



For all types of EJBs, you need to provide three Java interfaces/classes to fully describe the EJB to an EJB container:


1. The HOME INTERFACE which takes the form:

import javax.ejb.*;
import java.rmi.RemoteException;

public interface MyHomeInterface extends EJBHome {
    public MyRemoteInterface create()
        throws CreateException, RemoteException;
}


2. The REMOTE INTERFACE which takes the form:

import javax.ejb.*;
import java.rmi.RemoteException;

public interface MyRemoteInterface extends EJBObject {
    // business method definitions – all of which must declare RemoteException
}


3. The BEAN CLASS itself, which takes two forms:

For Session Beans:

import javax.ejb.*;
import java.rmi.RemoteException;

public class MyBean implements SessionBean {
    // required methods
    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setSessionContext(SessionContext ctx) {}

    // business method implementations
}


For Entity Beans:

import javax.ejb.*;
import java.rmi.RemoteException;

public class MyBean implements EntityBean {
    // entity bean methods – see EJNS p.199 for details
}


4. Additionally, for ENTITY BEANS, you must also provide a PRIMARY KEY CLASS, which takes the form:

import javax.ejb.*;

public class MyBeanPK implements java.io.Serializable {
    public String someFieldToUseAsPK;
    public MyBeanPK() { someFieldToUseAsPK = null; }
    public MyBeanPK(String s) { someFieldToUseAsPK = s; }
}


The primary key can only be guaranteed to return the same entity if it is used within the container that produced the key. If two EJB objects have the same home and same PK they are considered identical.


A primary key must implement the Serializable interface.

- EJB 1.1 requires that the hashCode and equals methods be overridden.

- EJB 1.1 allows the PK to be defined by the bean developer, or its definition can be deferred until deployment.

- A default (no-arguments) PK constructor is required for CMP. When a new bean is created, the container automatically instantiates the PK and populates it from the bean class's CMP fields.

- All fields in the PK class must be declared public.

- The String class and the wrapper classes for the primitive data types can also be used as PKs. In this case there is no explicit PK class, but there must still be an identifiable primary key within the bean class itself.

- (There are some special rules for this primary key class when using container-managed persistence. See EJNS p.197.)

OBJECTIVE #2: Distinguish stateful and stateless Session beans



Session beans can be either stateless or stateful. Stateless Session beans do not maintain state across method calls; the bean returns to the same state after each successive call to one of its methods from a client. This makes a stateless Session bean poolable for performance and accessible from multiple clients (one at a time) without fear of conflict. Stateless Session beans are never passivated, just destroyed (or removed from the pool) when no longer needed.


Stateless beans cannot participate in any transaction synchronization (e.g. implement interface SessionSynchronization). A stateless session bean is only dedicated to an EJB object for the duration of the method. Stateful Session beans maintain a state that can be accessed and changed across multiple client interactions with the bean’s methods. Stateful Session beans are accessed by only a single client and can be passivated by the container.


At deployment time, you indicate to the container whether a Session bean is stateful or stateless with an identifier in the deployment descriptor.


OBJECTIVE #3: Distinguish Session and Entity beans



There are two types of beans in the EJB specification, Session and Entity beans.

Session beans represent business logic, rules, and workflow. They perform some sort of work for the client that is calling them. Important features of Session beans:
1. Two types, stateful and stateless (see last objective above)
2. Never used by more than one client at a time
3. Lifetime is roughly the same as the period during which the client interacts with the bean


Entity beans represent data that is stored in a database or some persistent storage. Important features of Entity beans:

1. Persistent across client sessions and the lifetime of the server
2. Have a unique identity used for lookup
3. Multiple clients can access at once – the container manages this concurrency.
4. Persistence of the data the bean represents is either “container” or “bean” - managed

OBJECTIVE #4: Recognize appropriate uses for Entity, Stateful Session, and Stateless Session beans



Entity beans do not contain business process logic; they model data. Examples of persistent data include things like “account”, “customer”, “order line”, etc. Session beans model business processes such as performing a credit card verification, reserving a room, etc. Some processes require a single method call (stateless) to complete, while other processes may require repeated calls that require storing state information across these calls (stateful).


An example of a stateful process would be a bean that tracks an on-line shopper’s cart, the contents of which must be maintained across multiple calls to the bean. Code that represents a bank teller interacting with a customer might also need to track account balance or credit limit across multiple banking transactions in a single interaction with the customer.


Recognizing the type of bean to use begins by asking, “is this a process or a persistent thing I’m working with” (session versus entity) and if it is a process, asking “is this a single method call with no need to reference prior state or do I need multiple transactions with a stored state to accomplish this process?” (stateless versus stateful).


OBJECTIVE #5: State benefits and costs of Container Managed Persistence



Container Managed Persistence (CMP) refers to an EJB container providing automatic persistence for Entity beans. The container performs all aspects of data access for the entity bean, including saving, loading, and finding components of the data.


Stated Benefits: (nets out to: rapid application dev and portability)

Database Independence - No need to hard code against a specific database API to accomplish these things
Code Reduction – CMP performs all data access logic without the need to write any JDBC code, reducing the amount of code needed and hence development time for the bean.
Enhances Software Quality – JDBC code is not "type-safe" and is frequently buggy because it cannot be verified until runtime. The developer who wrote the container likely writes better data access code than you do, and containers usually provide tools that allow detection of errors at compile time. The better the container, the more your Entity bean code improves in quality.

Real Life Costs: (nets out to: limits flexibility, portability not 100%)


Not all containers require you to write absolutely zero persistence code. Most containers at least require you to navigate a wizard to specify persistent fields and that you specify the logic for your finder methods.


If there is a bug, it is hard to see what database operations the container is actually executing in order to troubleshoot the problem. Some containers may not support mapping of complex field types to underlying storage. Example: your entity bean class has a vector as a persistent member field. You may have to convert this vector to some other type that the container can persist for you.


Portability across databases is not yet perfect. Specifying persistence semantics is not standardized yet and what BEA requires you to do to map your objects to relational fields may be inconsistent with another container vendor. If mapping your objects to relational tables is not 1:1 or you need complicated SQL joins in finder methods, some containers lack the ability to handle this.

May require sophisticated mapping tools to define how the bean’s fields map to the database.

EJB 1.1 allows the CMP fields to include remote references to EJB objects (remote interface types) and EJB homes (home interface types).


EJB 1.0 only allowed Java primitives and serializable types.

Other costs sometimes cited include data aliasing and eager state loading.

OBJECTIVE #6: State the transactional behavior in a given scenario for an enterprise bean method with a specified transactional attribute as defined in the deployment descriptor



Transactional attributes of bean methods are specified in the deployment descriptor. Here are the attributes and what they mean (MEJB p.706): TX_BEAN_MANAGED The bean programmatically controls it’s own tx. EJB 1.0 Only boundaries via JTA.


NotSupported The bean CANNOT be involved in a
transaction at all. When a bean method is called, any existing tx is suspended.


Required The bean must ALWAYS run in a transaction. If a tx is already running, the bean joins in that tx. If not, the container starts a tx for you.


RequiresNew The bean must ALWAYS run in a NEW transaction. Any current tx is suspended.

Supports If a transaction is underway, the bean joins in that tx, otherwise runs with no tx at all.

Mandatory Mandates that a transaction must already be running when the bean method is called or an exception is thrown back to the caller.


Never (EJB 1.1 only) If a tx is underway the bean will throw a RemoteException; otherwise the method runs normally without a tx.
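In an EJB 1.1 deployment descriptor these attributes are declared with container-transaction elements. A hypothetical sketch (the bean name AccountBean and the method names are invented for illustration):

```xml
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
  <container-transaction>
    <!-- a specific method declaration overrides the wildcard above -->
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>audit</method-name>
    </method>
    <trans-attribute>RequiresNew</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```

With this descriptor a call to audit() always runs in a new tx, while every other AccountBean method joins the caller's tx or gets a container-started one.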


OBJECTIVE #7: Given a requirement specification detailing security and flexibility needs, identify architectures that would fulfill those requirements.



Security: EJB servers can provide up to three types of security service:
1. Authentication – validates the identity of the user – usually via JNDI.
A client using JNDI can provide authenticating information using the JNDI API to access a server or resources in the server.


2. Access Control – validates what the user can and cannot do. EJB 1.1 changed authorization to be based on logical security roles rather than Identity objects, and to be role-driven rather than method-driven.


Deployment descriptors include tags that declare which logical roles are allowed to access which bean methods at runtime. Security roles are mapped to real-world user groups and users when the bean is deployed. Once the security-role tags are declared they can be associated with methods in the bean using method-permission tags.
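These declarations might look like the following sketch in an EJB 1.1 deployment descriptor (the role name, bean name, and method name are hypothetical):

```xml
<assembly-descriptor>
  <security-role>
    <description>Bank employees who may move money</description>
    <role-name>teller</role-name>
  </security-role>
  <method-permission>
    <!-- grants the teller role access to one method of AccountBean -->
    <role-name>teller</role-name>
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>deposit</method-name>
    </method>
  </method-permission>
</assembly-descriptor>
```

At deployment the teller role would be mapped to real users or groups in the operational environment.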


3. Secure Communication – client server communication channels can be secured by physical isolation or encrypting the conversation - usually via SSL. Encryption usually involves the exchange of keys to decode the encrypted data.


Most EJB servers address Secure Communications through SSL encryption and also provide some mechanism such as JNDI for Authentication. The EJB 1.1 Specification only addresses Access Control on the server side.

For a complete discussion of J2EE security services and topics like Role-Driven versus Method-Driven access control, see EJB Chap 3: Security (p.71).


OBJECTIVE #8: Identify costs and benefits of using an intermediate data-access object between an entity bean and the data resource.



EJBs are remote objects that consume significant system resources and network bandwidth. You can use Data Access Objects to encapsulate the logic required to access databases.


Data Access Objects:

Allow EJBs to delegate responsibility for database access, freeing them from complex data access routines.
Make code more maintainable.
Provide an easier migration path to CMP.
Allow you to adapt data access to different schemas and different databases.
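The delegation idea can be sketched in plain Java. The AccountDAO interface and its in-memory implementation below are hypothetical; a real DAO would issue JDBC calls, and being able to swap implementations is what makes schema or database changes cheap:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO interface: the entity bean delegates all persistence to it.
interface AccountDAO {
    void insert(String id, double balance);
    Double load(String id);
}

// Swappable implementation; a real one would run SQL against a specific schema.
class InMemoryAccountDAO implements AccountDAO {
    private final Map<String, Double> store = new HashMap<>();
    public void insert(String id, double balance) { store.put(id, balance); }
    public Double load(String id) { return store.get(id); }
}

public class DaoDemo {
    public static void main(String[] args) {
        AccountDAO dao = new InMemoryAccountDAO(); // ejbCreate would call dao.insert
        dao.insert("A-100", 50.0);                 // ejbLoad would call dao.load
        System.out.println(dao.load("A-100"));
    }
}
```

The bean class never touches SQL; changing databases means supplying a different AccountDAO implementation.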


OBJECTIVE #9: State the benefits of bean pooling in an EJB container.



EJB clients interact with remote interfaces that are implemented by EJB Object instances. This indirect access to the bean instance means that there is no fundamental reason to keep a separate copy of each bean for each client. The server can keep a much smaller number of beans instantiated in a “pool” that serves a large number of clients. The container simply copies data into or out of these “pooled instances” as necessary. Benefit: greatly reduces the server resources required to serve the same number of clients.
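The resource saving can be illustrated with a toy pool. This is a plain-Java sketch of the idea, not container code: a handful of pooled instances, with per-call state copied in and out, serve many client requests:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy instance pool: a StringBuilder stands in for a bean instance.
class BeanPool {
    private final Deque<StringBuilder> pool = new ArrayDeque<>();

    BeanPool(int size) {
        for (int i = 0; i < size; i++) pool.push(new StringBuilder());
    }

    String serve(String clientData) {
        StringBuilder instance = pool.pop(); // borrow a pooled instance
        instance.setLength(0);
        instance.append(clientData);         // copy client state in
        String result = instance.toString(); // run the "business method"
        pool.push(instance);                 // return instance to the pool
        return result;
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        BeanPool pool = new BeanPool(2);     // only 2 instances exist...
        for (int i = 0; i < 100; i++)        // ...yet they serve 100 requests
            if (!pool.serve("req" + i).equals("req" + i))
                throw new AssertionError("wrong state served");
        System.out.println("100 requests served by 2 pooled instances");
    }
}
```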


OBJECTIVE #10: State the benefits of passivation in an EJB container.



An EJB container can choose to temporarily serialize a bean and store it to the server filesystem or other persistent store so that it can optimally manage resources. This is called passivation. The benefit of passivation is to allow the EJB container to make the best possible use of server resources by passivating a bean to free up resources and then re-activating it when resources are available.


Note: A session bean can be passivated only between transactions, and not within a transaction.



OBJECTIVE #11: State the benefit of monitoring of resources in an EJB container.



An EJB container can achieve greater levels of performance and scalability by monitoring server resources and using techniques to share resources (such as instance pooling and activation/passivation) and also techniques to synchronize object interactions (such as managing concurrency, and transactional control) to optimally serve remote objects to clients.


OBJECTIVE #12: Explain how the EJB container does lifecycle management and has the capability to increase scalability.



The EJB container uses two fundamental strategies to perform lifecycle management, instance pooling and passivation/activation. Instance pooling reduces the number of component instances, and therefore resources, needed to service client requests. In addition, it is less expensive to reuse pooled instances than to frequently create and destroy instances. Since clients never access beans directly there’s no real reason to keep a separate copy of each bean for each client.


EJB Containers implement instance pooling (a significant performance and scalability strategy) by managing the lifecycle of beans. An entity bean exists in one of three states:
1. No State (has not been instantiated yet)
2. Pooled State (instantiated but not associated with an EJB Object)
3. Ready State (instantiated and associated).


If a client request is received and no entity bean instance is associated with the EJB Object connected to the client, an instance is retrieved from the pool and assigned to the EJB Object. Once an instance is assigned to an EJB object the ejbActivate method is called. This is called instance swapping. Instances are selected based on the vendor’s choice of a FIFO or LIFO strategy. The container can manage the number of instances in the pool and hence control performance.

When an entity bean instance is activated, transient fields contain arbitrary values and must be reset in the ejbActivate method.


For a CMP bean the CMP fields are automatically synchronized with the database after ejbActivate has been invoked. In BMP, after ejbActivate has completed, the container invokes the ejbLoad method and the bean synchronizes its own fields with the database.


Entity beans can be passivated at any time provided the instance is not currently executing a method. Passivation of an entity bean is simply a notification that the instance is about to be disassociated from the EJB object.

Stateful Session Beans maintain a conversational state between method invocations. The container must maintain this conversational state for the life of the bean’s service to the client. Stateful Session beans do not participate in instance pooling; rather, the container may choose to evict a stateful Session bean from memory to conserve resources. The bean is passivated (i.e. has its “state” serialized to disk) and disassociated from the EJB Object. The client is completely unaware of this performance move on the part of the container. When the client invokes another method on the EJBObject, a new instance is created and populated with the passivated state (called “activation”).


The ejbPassivate method is called immediately prior to passivation of a bean instance. The ejbActivate method is called immediately following the successful activation of a bean instance.

Additional Notes

The EJB architecture is an architecture for component-based distributed computing. EJBs are components of distributed transaction-oriented enterprise applications.

Enterprise beans are intended to be relatively coarse-grained business objects (purchase order, employee record). Fine-grained objects (line item on a purchase order, employee’s address) should not be modeled as enterprise bean components.


Stateful Session

Note: Sun’s EJB 1.1 spec states “Session beans are intended to be stateful. The EJB specification also defines a stateless Session bean as a special case of a Session Bean. There are minor differences in the API between stateful (normal) Session beans and stateless Session beans.”


A typical (stateful) session object has the following characteristics:

Executes on behalf of a single client.

It is not shared among multiple clients.
Do not support concurrent access.
Can be transaction-aware
Updates shared data in an underlying database
Does NOT represent directly shared data in the database, although it may access and update that data
Is relatively short-lived
Is removed when the EJB container crashes. The client has to re-establish a new session object to continue computation


By definition, a session bean instance is an extension of the client that creates it:

Its fields contain a conversational state on behalf of the session object’s client.
It typically reads and updates data in a database on behalf of the client. Within a transaction, some of this data may be cached in the instance.
Its lifetime is controlled by the client.

Find methods are not used in session beans.


When a remove method is called on a session bean it ends the bean’s service to the client. The remote reference becomes invalid and any conversational state maintained by the bean is lost.

As soon as the server removes a stateful session bean its handle is invalid and will throw a RemoteException when its getEJBObject() method is invoked.


A stateful session object has a unique identity that is assigned by the container at create time. This identity, in general, does not survive a crash and restart of the container, although a high-end container impl can mask container and server crashes from the client. Session objects appear to be anonymous to a client.


If a timeout occurs the ejbRemove() method is not called. A stateful session bean cannot time out while a transaction is in progress. A session bean can timeout while it is passivated. A stateful session bean cannot be removed while it is involved in a transaction. The home interface allows a client to obtain a handle for the home interface. The home handle can be serialized and written to storage. Later, possibly in a different JVM, the handle can be de-serialized from storage and used to obtain a reference to the home interface.


When the handle is later de-serialized, the session object it returns will work as long as the session object still exists on the server. (An earlier timeout or server crash may have destroyed the session object. In this case the handle is useless.) Typically, a session object’s conversational state is not written to the database. A session bean developer stores it in the session bean instance’s fields and assumes its value is retained for the lifetime of the instance. On the other hand, the session bean must explicitly manage cached database data. (It can do this when participating in a transaction.)

A session object’s conversational state may contain open resources, such as open sockets and open database cursors. A container cannot retain such open resources when a session bean instance is passivated. A developer of such a session bean must close and open the resources in the ejbPassivate and ejbActivate methods. As an example, the ejbPassivate method must close all JDBC connections and assign the instance’s fields that store these connections to null.
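A sketch of how a bean class might handle this (illustrative only; a real bean implements javax.ejb.SessionBean and would re-acquire the connection from a DataSource in the JNDI ENC):

```java
import java.sql.Connection;

// Sketch of passivation-safe resource handling in a stateful session bean.
public class ResourceBean {
    private transient Connection con; // open resources cannot be passivated

    public void ejbPassivate() {
        try {
            if (con != null) con.close(); // close before serialization
        } catch (Exception e) {
            // log and continue; passivation must still succeed
        }
        con = null; // null the field so the conversational state is serializable
    }

    public void ejbActivate() {
        // re-open the connection here, e.g. from a DataSource looked up in
        // the JNDI ENC (omitted in this sketch)
    }
}
```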


In EJB 1.1, when a bean is passivated the following types may be saved as part of the conversational state: references to the JNDI ENC (javax.naming.Context, but only when it references the JNDI ENC), references to SessionContext, references to other beans (home and remote interfaces), and the JTA UserTransaction type.

The container must maintain any instance fields that reference objects of these types as part of the conversational state even if they are not serializable. Primitive types and serializable objects can also be part of the conversational state. In EJB, transient fields are not necessarily set back to their initial values but can maintain their original values, or any arbitrary value, after being activated. Care should be taken when using transient fields, since their state following activation is implementation-dependent. The ejbActivate method is called following the successful activation of a bean instance and can be used to reset transient field values.


If the EJB container crashes, or a system exception is thrown by a method to the container, or a timeout occurs while the bean is passivated, then ejbRemove will not be invoked. This means any allocated resources will not be released. A session object’s conversational state is not transactional. It is not automatically rolled back to its initial state if the transaction in which the object has participated rolls back. The developer must use the afterCompletion method to manually reset the state.


Only a stateful session bean with container-managed transaction demarcation may implement the SessionSynchronization interface. This means that the bean can cache database data across several method calls before making an update. Clients are not allowed to make concurrent calls to a session object. This will cause a remote exception to be thrown to the second client.

A RuntimeException thrown from any method of a stateful session bean results in a transition to the “does not exist” state. From the client perspective, the corresponding stateful session object does not exist anymore. Subsequent calls through the remote interface will result in NoSuchObjectExceptions.

Calls to a session bean that has had ejbRemove invoked on it will result in NoSuchObjectExceptions.

Calls to a session bean that has timed out will result in NoSuchObjectExceptions.


Stateless Session

Stateless session beans have no conversational state.

All stateless session bean instances are equivalent when they are not involved in serving a client-invoked method.

The home interface of all stateless session beans contains one create() method with no arguments. This is a requirement of the EJB spec. This means that the bean class only has one ejbCreate() method. (The ejbCreate method is not actually invoked by the create method.)

A timeout or remove operation simply invalidates the remote reference for the client. The bean instance is not destroyed and is free to service other client requests.

The term “stateless” signifies that an instance has no state for a specific client. However, the instance variables of the instance can contain state across client-invoked calls. An example is a database connection.

Everything that a stateless session bean needs to know to perform an operation has to be passed via the method’s parameters.

Calling getPrimaryKey on a stateless session bean will cause a RemoteException to be thrown.

Calling remove(primaryKey) on a stateless (or stateful) session bean will cause a RemoteException to be thrown.

Passivation and Activation are not needed for stateless session beans.

All session objects of the same stateless session bean within the same home have the same object identity that is assigned by the container.

Applications that use stateless session beans may scale somewhat better than those using stateful session beans. However, this benefit may be offset by the increased complexity of the client application that uses stateless beans.

A stateless session bean must not implement the SessionSynchronization interface.

A RuntimeException thrown from any method of a stateless session bean results in a transition to the “does not exist” state. From the client perspective, the stateless session object continues to exist. The client can continue accessing the session object because the container can delegate the client’s requests to another instance.

Entity Bean

A typical entity object has the following characteristics:

Provides an object view of data in the database
Allows shared access from multiple users
Can be long-lived (lives as long as the data in the database)

The entity, its primary key, and its remote reference survive the crash of the EJB container. If the state of an entity was being updated by a transaction at the time the container crashed, the entity’s state is automatically reset to the state of the last committed transaction. The crash is not fully transparent to the client – the client may receive an exception if it calls an entity in a container that has had a crash.


A good rule of thumb is that entity beans model business concepts that can be expressed as nouns. They are usually persistent records in a database. Constructors should never be defined in the bean class. The default ctor must be available to the container. (It is automatically generated for a class if no ctors are provided.)


In EJB 1.1 the ejbCreate() method always returns the primary key type. With CMP this method returns null. It is the container’s responsibility to create the primary key. The findByPrimaryKey() method is not defined for CMP entity beans. With CMP you do not explicitly declare find methods in the bean class. The EJB container generates the find methods automatically at deployment time. In CMP any find method included in the home interface must be described to the container. With BMP entity beans find methods must be defined in the bean class.


When a remove method is invoked on an entity bean, the remote reference becomes invalid and any data that it represents is deleted from the database. The entity bean’s persistence can be implemented directly in the entity bean class or in one or more helper classes (Data Access Objects) provided with the entity bean class (BMP) or it can be delegated to the container (CMP).

Directly coding the data access calls in the entity bean class may make it more difficult to adapt the entity bean to work with a database that has a different schema or with a different type of database. Use of a DAO could allow adapting the data access to different schemas or databases.

The advantage of using CMP is that the entity bean can be largely independent from the data source in which the entity is stored. CMP does not make use of DAOs (obviously).


For CMP the data access components are generated at deployment time by the container tools. An entity bean using CMP must not have explicit data access. The Bean Provider is responsible for using the cmp-field elements of the deployment descriptor to declare the instance’s fields that the container must load and store at defined times. The fields must be defined in the entity bean class as public and must not be defined as transient.


The types that can be assigned to a container-managed field are: Java primitives, Java serializable types, and references to EJB remote or home interfaces.

CMP has the following limitations:

Data Aliasing - An update of a data item done through a container-managed field of one entity bean may not be visible to another entity bean in the same transaction if the other entity bean maps to the same data item.

Eager Loading of State – The container loads the entire object state into the container-managed fields before invoking the ejbLoad method.

An entity bean instance is always in one of the following three states:

Does not exist
No instance of the bean exists.

Pooled State
An instance in the pooled state is not associated with any particular entity object identity. Bean instances in the pooled state typically service find requests, since they aren’t busy and find methods do not depend on the instance’s state; otherwise the instance remains inactive until it is requested.

Ready State
An instance in the ready state is assigned an entity object identity. A bean enters the ready state via calls to ejbCreate/ejbPostCreate or ejbActivate. Once in the ready state the container can call ejbLoad or ejbStore on the bean zero or more times. Each entity bean has its own pool of instances. All instances in the pool are considered equivalent, and therefore any instance can be assigned by the container to any entity object identity at the transition to the ready state. The instance does not move to the ready state during the execution of a finder method.

The container can passivate an entity bean instance within a transaction. Passivating an entity bean causes the ejbStore method to be invoked; the ejbPassivate method then returns the instance to the pooled state.


When a find method is executed, each bean found will be realized by transitioning an instance from the Pooled state to the Ready state. A RuntimeException thrown from any method of the entity bean class results in the transition to the “does not exist” state. From the client perspective, the corresponding entity object continues to exist. The client can continue accessing the entity object through its remote interface because the container can use a different instance to delegate the client’s requests.


The container is not required to maintain a pool of instances in the pooled state. The pooling approach is an example of a possible implementation, but it is not required.

An instance of an entity bean with BMP can cache the entity object’s state between business method invocations. An instance may choose to cache the entire entity object’s state, part of the state, or no state at all. When the container invokes ejbStore the instance must push all cached updates of the entity’s state to the database.


When the container invokes ejbLoad the instance must discard any cached entity object’s state. The instance may, but is not required to, refresh the cached state by reloading it from the database. The use of the ejbLoad and ejbStore methods for caching an entity object’s state in the instance works well only if the container can use transaction boundaries to drive the methods. Therefore, these methods are “unreliable” if the transaction attribute is set to Not Supported for the method or the bean.


The container will ensure appropriate synchronization for entity objects that are accessed concurrently from multiple transactions. If an instance of a non-reentrant entity bean executes a client request in a given transaction context and another request with the same transaction context arrives for the same entity object, the container will throw a RemoteException to the second client. This allows a bean to be written as single-threaded and non-reentrant. It is not recommended to write reentrant entity beans.


Deployment Descriptor

Deployment descriptors contain the following kinds of information:

EJB name
Class names of the home if, remote if, and the bean
Primary key class name
Persistence type (BMP vs CMP)
Field names of CMP fields
Environmental or property entries
References to other beans (using the home & remote if names)
References to external resources like database connections
Security roles for accessing the bean/bean methods
Method permissions that map security roles to the methods they may invoke
Transactional attributes of the beans methods
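Much of that information appears as elements of an EJB 1.1 ejb-jar.xml. A hypothetical CMP entity bean entry (all names invented for illustration):

```xml
<ejb-jar>
  <enterprise-beans>
    <entity>
      <ejb-name>AccountBean</ejb-name>
      <home>com.example.AccountHome</home>
      <remote>com.example.Account</remote>
      <ejb-class>com.example.AccountBeanImpl</ejb-class>
      <persistence-type>Container</persistence-type>
      <prim-key-class>java.lang.String</prim-key-class>
      <reentrant>False</reentrant>
      <!-- fields the container must load and store -->
      <cmp-field><field-name>balance</field-name></cmp-field>
      <!-- a named environment entry -->
      <env-entry>
        <env-entry-name>maxOverdraft</env-entry-name>
        <env-entry-type>java.lang.Double</env-entry-type>
        <env-entry-value>100.0</env-entry-value>
      </env-entry>
      <!-- a reference to an external resource -->
      <resource-ref>
        <res-ref-name>jdbc/AccountDB</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
      </resource-ref>
    </entity>
  </enterprise-beans>
</ejb-jar>
```

Security roles, method permissions, and transactional attributes go in the assembly-descriptor section of the same file.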

In container-transaction elements specific method declarations override more general declarations.

The following methods must be assigned tx attributes for each bean in the DD:


All business methods in the remote if (and superinterfaces)

Create methods in the home if

Find methods in the home if

Remove methods in the home (EJBHome) and remote (EJBObject) ifs




In EJB 1.1 the bean deployer declares the timeout in a vendor-dependent manner. Timeouts are no longer included in the deployment descriptor. Named properties can be declared in a bean’s deployment descriptor. They can be Strings or wrapper types.


Transaction Management


Dirty Read:
A dirty read occurs when the first transaction reads uncommitted changes made by a second transaction. If the second transaction is rolled back, the data read by the first transaction becomes invalid because the rollback undoes the changes.


Repeatable Read:
A repeatable read is when the data read is guaranteed to look the same if read again during the same transaction.


Phantom Read:
Phantom reads occur when new records added to the database are detectable by transactions that started prior to the insert. Queries will include records added by other transactions after their transaction has started.


Isolation levels are commonly used in database systems to describe how locking is applied to data within a transaction.

Read Uncommitted:
The transaction can read uncommitted data. Dirty reads, nonrepeatable reads, and phantom reads can occur.

Read Committed:
The transaction cannot read uncommitted data. Dirty reads are prevented; nonrepeatable reads and phantom reads can occur. Bean methods with this isolation level cannot read uncommitted data.

Repeatable Read:
The tx cannot change data that is being read by a different tx. Dirty reads and nonrepeatable reads are prevented; phantom reads can occur. Bean methods with this isolation level have the same restrictions as Read Committed and can only execute repeatable reads.

Serializable:
The transaction has exclusive read and update privileges to data; different transactions can neither read nor write the same data. Dirty reads, nonrepeatable reads, and phantom reads are prevented. Most restrictive isolation level.


In EJB 1.1 isolation levels are not controlled through declarative attributes as was the case in EJB 1.0. In EJB 1.1, the deployer sets transaction isolation levels if the container manages the transaction. The bean developer sets the transaction isolation level if the bean manages the transaction.


As the isolation levels become more restrictive the performance of the system decreases, because more restrictive isolation levels prevent transactions from concurrently accessing the same data.

Different EJB servers allow different levels of granularity for setting isolation levels; some servers defer this responsibility to the database. In some servers, you may be able to set different isolation levels for different methods, while other products may require the same isolation level for all methods in a bean, or possibly even all beans in the container.

Bean managed txs in stateful session beans, however, allow the bean developer to specify the transaction isolation level using the API of the resource providing persistent storage.
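With JDBC as that resource, the bean would call Connection.setTransactionIsolation with one of the standard constants. A runnable sketch of the constants involved (no database required; the setTransactionIsolation call itself is shown only in the comment):

```java
import java.sql.Connection;

public class IsolationDemo {
    public static void main(String[] args) {
        // In BMP the bean obtains a Connection from its DataSource and calls
        // con.setTransactionIsolation(...) with one of these JDBC constants:
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1: dirty reads possible
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2: dirty reads prevented
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4: nonrepeatable reads prevented
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8: phantom reads prevented too
    }
}
```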

For stateless session beans transactions that are managed using the UserTransaction must be started and completed within the same method.


For stateful session beans a tx can begin in one method and be committed in another because a stateful session bean is only used by one client.



The remote interface for an enterprise bean defines the bean’s business methods. It also has a method for removing the bean. Remote interfaces extend the javax.ejb.EJBObject interface.

The home interface defines the bean’s life cycle methods (creating, removing, finding). Home interfaces extend the javax.ejb.EJBHome interface. Find methods are only used by entity beans.

The bean class actually implements the bean’s business methods.

The primary key is a simple class that provides a pointer into the database and is only needed by entity beans.


There is a standard mapping of the EJB architecture client-view contract to the CORBA protocols. The EJB-to-CORBA mapping not only enables on-the-wire interoperability among different impls of the EJB container but also allows non-Java clients to access EJBs through standard CORBA APIs.

The bean class must define a default constructor.

The bean class must not define the finalize method.

The ejbCreate method for an entity bean using BMP returns a PK instance.

The ejbCreate method for a CMP entity bean returns a null for the PK instance.

The ejbCreate method for a session bean returns void.

EJB 1.1 doesn’t use the JAR’s manifest. The first thing read from the JAR is the deployment descriptor. EJB 1.1 deployment descriptors are written in XML.


Having a client interact directly with entity beans is a common but bad design approach. Using a session bean between a client and an entity bean is a better approach. Session beans, if used properly, can substantially reduce network traffic by limiting the number of requests over the network to perform a task. Session beans can also reduce the number of network connections needed by the client.


Conceptually, an EJB server may have many containers, each of which may contain one or more types of enterprise beans. EJB, by default, prohibits concurrent access to bean instances. In other words, several clients can be connected to one EJB object, but only one client thread can access the bean instance at a time. The EJB specification prohibits use of the synchronized keyword.

EJB prohibits beans from creating their own threads.


In EJB 1.1 all beans have a default JNDI context called the environment naming context (ENC). The default context exists in the namespace (directory) called “java:comp/env” and its subdirectories. When a bean is deployed any beans it uses are mapped into the directory “java:comp/env/ejb” so that the bean references can be obtained at runtime through a simple and consistent use of the JNDI ENC. This eliminates network and implementation specific use of JNDI to obtain bean references.


EJB 1.1 provides a new object called a HomeHandle. The HomeHandle can be obtained from the EJBHome interface via getHomeHandle(). It allows a remote home reference to be stored and used later. There is also a Handle object. A Handle is a serializable reference to the EJB object, obtained via EJBObject.getHandle(). A Handle allows an EJB object remote reference that points to the same type of session bean or the same unique entity bean that the handle came from to be recreated.


It might appear that Handle and the primary key do the same thing but they are different. Using the primary key requires you to have the correct EJB home – if you no longer have a reference to the EJB home you must use JNDI to get a new home.

The Handle object is easier to use because it encapsulates the details of doing a JNDI lookup on the container. With the Handle the correct EJB object can be obtained in one method call rather than three. A handle can be used with either type of EJB. A handle allows you to work with a stateful session bean, deactivate the bean, and then reactivate it later using the handle.


The Handle is less stable than the PK because it relies on the networking configuration and naming to remain stable. Java RMI requires that all parameters and return values be either Java primitive types or objects that implement the Serializable interface. Serializable objects are passed by copy (passed by value).
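Pass-by-copy can be demonstrated with plain serialization, a sketch of what RMI does with a Serializable parameter:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A serialized object arrives as a copy, so changes on the receiving side
// never affect the caller's original object.
public class CopyDemo {
    static Object deepCopy(Serializable o) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(o); // serialize, as RMI does to a parameter
        oos.flush();
        ObjectInputStream ois =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return ois.readObject(); // deserialize into a brand-new copy
    }

    public static void main(String[] args) throws Exception {
        int[] original = {1, 2, 3};             // arrays are Serializable
        int[] copy = (int[]) deepCopy(original);
        copy[0] = 99;                           // mutate the "remote" copy
        System.out.println(original[0]);        // prints 1: caller unaffected
    }
}
```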






Internationalization

To be usable across national and regional boundaries, applications have to adapt to local customs. Culture-dependent aspects of an application that need to be adapted include:

Data Input

Data Storage

Data Formatting

Localizing content


These aspects affect the following objects:

Text: Messages, Labels, Errors, other content

Numbers: Plain numbers, currency values, percentage values

Dates and Time

Images, Sounds and other data


Internationalization is abbreviated as I18N because there are 18 letters between the first “i” and the last “n”.


An internationalized program has the following characteristics:


With the addition of localization data, the same executable can run worldwide.

Textual elements, such as status messages and the GUI component labels, are not hard-coded in
the program. Instead they are stored outside the source code and retrieved dynamically.

Support for new languages does not require recompilation. It can be localized quickly.


Localization is the process of adapting a program for use in a specific locale. A locale is a geographic or political region that shares the same language and customs. Localization includes the translation of text such as GUI labels, error messages, and online help. It also includes the culture-specific formatting of data items such as monetary values, times, dates, and numbers.




1) State three aspects of any application that might need to be varied or customized in different deployment locales.


There are many types of data that vary with region or language. Examples of this data are:

Labels on GUI components
Online help
Phone numbers
Honorifics and personal titles
Postal addresses
Page layouts


2) Match the following features of the Java 2 platform with descriptions of their functionality, purpose or typical uses: Properties, Locale, ResourceBundle, Unicode, java.text package, InputStreamReader and OutputStreamWriter.


The java.util.Properties class represents a persistent set of properties. The Properties can be saved to a stream or loaded from a stream. Each key and its corresponding value in the property list is a string. A properties file stores information about the characteristics of a program or environment including internationalization/localization information.


By creating a Properties object and using the load method, a program can read a properties file. The program can then access the values by using the key as follows:

Properties props = new Properties();
props.load(new BufferedInputStream(new FileInputStream("filename")));
String value = props.getProperty("key");

If the key is not found, getProperty returns null.
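A self-contained variant of the sketch above that loads properties from an in-memory stream rather than a file (the key names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Properties;

public class PropertiesDemo {
    // Load key/value pairs from any InputStream, not only a file.
    static Properties load(String text) throws IOException {
        Properties props = new Properties();
        props.load(new ByteArrayInputStream(text.getBytes()));
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties props = load("greeting=hello\nfarewell=goodbye\n");
        System.out.println(props.getProperty("greeting"));   // hello
        System.out.println(props.getProperty("missing"));    // null
    }
}
```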

A java.util.Locale object represents a specific geographical, political, or cultural region. An operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to tailor information for the user. For example, displaying a number is a locale-sensitive operation--the number should be formatted according to the customs/conventions of the user's native country, region, or culture.


Because a Locale object is just an identifier for a region, no validity check is performed when you construct a Locale. If you want to see whether particular resources are available for the Locale you construct, you must query those resources. For example, ask the NumberFormat for the locales it supports using its getAvailableLocales method.

Note: When you ask for a resource for a particular locale, you get back the best available match, not necessarily precisely what you asked for. For more information, refer to the ResourceBundle section.

The Locale class provides a number of convenient constants that you can use to create Locale objects for commonly used locales. For example, the following creates a Locale object for the United States:
Locale usLocale = Locale.US;

Once you've created a Locale you can query it for information about itself. Use getCountry to get the ISO Country Code and getLanguage to get the ISO Language Code. You can use getDisplayCountry to get the name of the country suitable for displaying to the user. Similarly, you can use getDisplayLanguage to get the name of the language suitable for displaying to the user. Interestingly, the getDisplayXXX methods are themselves locale-sensitive and have two versions: one that uses the default locale and one that uses the locale specified in the argument.
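A brief sketch of these queries; note that the display strings come from the JDK's locale data:

```java
import java.util.Locale;

public class LocaleDemo {
    // Query a Locale for its ISO codes and its display name,
    // rendered in the language of a second ("reader") Locale.
    static String describe(Locale target, Locale inLocale) {
        return target.getCountry() + "/" + target.getLanguage()
                + ": " + target.getDisplayLanguage(inLocale);
    }

    public static void main(String[] args) {
        // getDisplayLanguage is itself locale-sensitive: the same Locale
        // is described differently depending on the reader's locale.
        System.out.println(describe(Locale.FRANCE, Locale.ENGLISH));
        System.out.println(describe(Locale.FRANCE, Locale.FRENCH));
    }
}
```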


Resource bundles contain locale-specific objects. When your program needs a locale-specific resource, a String for example, your program can load it from the resource bundle that is appropriate for the current user's locale. In this way, you can write program code that is largely independent of the user's locale isolating most, if not all, of the locale-specific information in resource bundles.

This allows you to write programs that can:
· be easily localized, or translated, into different languages
· handle multiple locales at once
· be easily modified later to support even more locales

One resource bundle is, conceptually, a set of related classes that inherit from ResourceBundle. Each related subclass of ResourceBundle has the same base name plus an additional component that identifies its locale. For example, suppose your resource bundle is named MyResources. The first class you are likely to write is the default resource bundle which simply has the same name as its family--MyResources. You can also provide as many related locale-specific classes as you need: for example, perhaps you would provide a German one named MyResources_de.
Each related subclass of ResourceBundle contains the same items, but the items have been translated for the locale represented by that ResourceBundle subclass. For example, both MyResources and MyResources_de may have a String that's used on a button for canceling operations. In MyResources the String may contain Cancel and in MyResources_de it may contain Abbrechen.

If there are different resources for different countries, you can make specializations: for example, MyResources_de_CH is the German language (de) in Switzerland (CH). If you want to only modify some of the resources in the specialization, you can do so.
When your program needs a locale-specific object, it loads the ResourceBundle class using the getBundle method:
ResourceBundle myResources =
ResourceBundle.getBundle("MyResources", currentLocale);

The first argument specifies the family name of the resource bundle that contains the object in question. The second argument indicates the desired locale. getBundle uses these two arguments to construct the name of the ResourceBundle subclass it should load as follows.

The resource bundle lookup searches for classes with various suffixes on the basis of (1) the desired locale and (2) the current default locale as returned by Locale.getDefault(), and (3) the root resource bundle (baseclass), in the following order from lower-level (more specific) to parent-level (less specific):
baseclass + "_" + language1 + "_" + country1 + "_" + variant1
baseclass + "_" + language1 + "_" + country1 + "_" + variant1 + ".properties"
baseclass + "_" + language1 + "_" + country1
baseclass + "_" + language1 + "_" + country1 + ".properties"
baseclass + "_" + language1
baseclass + "_" + language1 + ".properties"
baseclass + "_" + language2 + "_" + country2 + "_" + variant2
baseclass + "_" + language2 + "_" + country2 + "_" + variant2 + ".properties"
baseclass + "_" + language2 + "_" + country2
baseclass + "_" + language2 + "_" + country2 + ".properties"
baseclass + "_" + language2
baseclass + "_" + language2 + ".properties"
baseclass + ".properties"

For example, if the current default locale is en_US, the locale the caller is interested in is fr_CH, and the resource bundle name is MyResources, resource bundle lookup will search for the following classes, in order:


The result of the lookup is a class, but that class may be backed by a properties file on disk. That is, if getBundle does not find a class of a given name, it appends ".properties" to the class name and searches for a properties file of that name. If it finds such a file, it creates a new PropertyResourceBundle object to hold it.


Following on from the previous example, it will return classes and files, giving preference as follows:

(class) MyResources_fr_CH
(class) MyResources_fr
(class) MyResources_en_US
(class) MyResources_en
(class) MyResources



If a lookup fails, getBundle() throws a MissingResourceException.

The baseclass must be fully qualified (for example, myPackage.MyResources, not just MyResources). It must also be accessible by your code; it cannot be a class that is private to the package where ResourceBundle.getBundle is called.

Note: ResourceBundles are used internally in accessing NumberFormats, Collations, and so on. The lookup strategy is the same.

Resource bundles contain key/value pairs. The keys uniquely identify a locale-specific object in the bundle. Here's an example of a ListResourceBundle that contains two key/value pairs:

class MyResource extends ListResourceBundle {
    public Object[][] getContents() {
        return contents;
    }
    static final Object[][] contents = {
        {"OkKey", "OK"},
        {"CancelKey", "Cancel"},
    };
}

Keys are always Strings. In this example, the keys are OkKey and CancelKey. The values here are also Strings--OK and Cancel--but they don't have to be; the values can be any type of object.

You retrieve an object from resource bundle using the appropriate getter method. Because OkKey and CancelKey are both strings, you would use getString to retrieve them:

button1 = new Button(myResourceBundle.getString("OkKey"));
button2 = new Button(myResourceBundle.getString("CancelKey"));

The getter methods all require the key as an argument and return the object if found. If the object is not found, the getter method throws a MissingResourceException. Besides getString, ResourceBundle supports a number of other methods for getting different types of objects, such as getStringArray. If you don't have an object that matches one of these methods, you can use getObject and cast the result to the appropriate type. For example:
int[] myIntegers = (int[]) myResources.getObject("intList");
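Pulling these pieces together, a minimal self-contained sketch (class and key names are illustrative):

```java
import java.util.ListResourceBundle;

public class BundleDemo {
    // A bundle defined inline; in practice it would live in its own file
    // so that the getBundle lookup can find it by name.
    public static class Labels extends ListResourceBundle {
        public Object[][] getContents() {
            return new Object[][] {
                {"OkKey", "OK"},
                {"CancelKey", "Cancel"},
                {"intList", new int[] {1, 2, 3}},
            };
        }
    }

    public static void main(String[] args) {
        Labels bundle = new Labels();
        System.out.println(bundle.getString("OkKey"));     // a String value
        int[] ints = (int[]) bundle.getObject("intList");  // any object type
        System.out.println(ints.length);
    }
}
```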

NOTE: You should always supply a baseclass with no suffixes. This will be the class of "last resort", if a locale is requested that does not exist. In fact, you must provide all of the classes in any given inheritance chain that you provide a resource for. For example, if you provide MyResources_fr_BE, you must provide both MyResources and MyResources_fr or the resource bundle lookup won't work right.

The Java 2 platform provides two subclasses of ResourceBundle, ListResourceBundle and PropertyResourceBundle, that provide a fairly simple way to create resources. (Once serialization is fully integrated, we will provide another way.) As you saw briefly in a previous example, ListResourceBundle manages its resource as a List of key/value pairs. PropertyResourceBundle uses a properties file to manage its resources.

If ListResourceBundle or PropertyResourceBundle do not suit your needs, you can write your own ResourceBundle subclass. Your subclasses must override two methods: handleGetObject and getKeys. The following is a very simple example of a ResourceBundle subclass, MyResources, that manages two resources (for a larger number of resources you would probably use a Hashtable). Notice that if the key is not found, handleGetObject must return null; if the key is null, a NullPointerException should be thrown. Notice also that you don't need to supply a value if a "parent-level" ResourceBundle handles the same key with the same value, as okKey is handled by the default bundle below. Also notice that if you provide, say, an en_GB resource bundle, you must also provide a default en resource bundle, even though it inherits all its data from the root resource bundle.

// default (English language, United States)
public class MyResources extends ResourceBundle {
    public Object handleGetObject(String key) {
        if (key.equals("okKey")) return "Ok";
        if (key.equals("cancelKey")) return "Cancel";
        return null;
    }
    // getKeys() is also required; omitted here for brevity
}

// German language
public class MyResources_de extends MyResources {
    public Object handleGetObject(String key) {
        // don't need okKey, since parent level handles it
        if (key.equals("cancelKey")) return "Abbrechen";
        return null;
    }
}

You do not have to restrict yourself to using a single family of ResourceBundles. For example, you could have a set of bundles for exception messages, ExceptionResources (ExceptionResources_fr, ExceptionResources_de, ...), and one for widgets, WidgetResource (WidgetResources_fr, WidgetResources_de, ...); breaking up the resources however you like.



Unicode is an international effort to provide a single character set that everyone can use.

Java uses the Unicode 2.0 (or 2.1) character encoding standard. In Unicode, every character occupies two bytes. Ranges of character encodings represent different writing systems or other special symbols. For example, Unicode characters in the range 0x0000 through 0x007F represent the basic Latin alphabet, and characters in the range 0x4E00 through 0x9FFF represent the Han characters used in China, Japan, Korea, Taiwan, and Vietnam.


UTF-8 is a multibyte encoding format that stores some characters as one byte and others as two or three bytes. If most of your data consists of ASCII characters, it is more compact than Unicode, but in the worst case a UTF-8 string can be 50 percent larger than the corresponding Unicode string. Overall, it is fairly efficient.

Despite the advantages of Unicode, there are some drawbacks: Unicode support is limited on many platforms because of the lack of fonts capable of displaying all the Unicode characters.



This package provides classes and interfaces for handling text, dates, numbers, and messages in ways that are independent of natural languages. This allows programs to be written in a language-independent manner and relies on separate, dynamically linked localized resources.

These classes are capable of formatting dates, numbers, and messages, parsing; searching and sorting strings; and iterating over characters, words, sentences, and line breaks. This package contains three main groups of classes and interfaces:


Classes for iteration over text
Classes for formatting and parsing
Classes for string collation


Some of the classes in the java.text package are:



An Annotation object is used as a wrapper for a text attribute value if the attribute has annotation characteristics. These characteristics are:


The text range that the attribute is applied to is critical to the semantics of the range. That means, the attribute cannot be applied to subranges of the text range that it applies to, and, if two adjacent text ranges have the same value for this attribute, the attribute still cannot be applied to the combined range as a whole with this value.


The attribute or its value usually no longer applies if the underlying text is changed.

An example is grammatical information attached to a sentence: For the previous sentence, you can say that "an example" is the subject, but you cannot say the same about "an", "example", or "exam". When the text is changed, the grammatical information typically becomes invalid. Another example is Japanese reading information (yomi).


Wrapping the attribute value into an Annotation object guarantees that adjacent text runs don't get merged even if the attribute values are equal, and indicates to text containers that the attribute should be discarded if the underlying text is modified.



A CollationKey represents a String under the rules of a specific Collator object. Comparing two CollationKeys returns the relative order of the Strings they represent. Using CollationKeys to compare Strings is generally faster than using Collator.compare. Thus, when the Strings must be compared multiple times, for example when sorting a list of Strings, it is more efficient to use CollationKeys.


You can't create CollationKeys directly; rather, generate them by calling Collator.getCollationKey. You can only compare CollationKeys generated from the same Collator object. Generating a CollationKey for a String involves examining the entire String and converting it to a series of bits that can be compared bitwise. This allows fast comparisons once the keys are generated, and the cost of generating the keys is recouped when Strings need to be compared many times. On the other hand, the result of a comparison is often determined by the first couple of characters of each String; Collator.compare examines only as many characters as it needs, which allows it to be faster when doing single comparisons.



The Collator class performs locale-sensitive String comparison. You use this class to build searching and sorting routines for natural language text. Collator is an abstract base class. Subclasses implement specific collation strategies. You can use the static factory method, getInstance, to obtain the appropriate Collator object for a given locale.
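A sketch of locale-sensitive sorting using CollationKeys as described above (the word list is illustrative):

```java
import java.text.CollationKey;
import java.text.Collator;
import java.util.Arrays;
import java.util.Locale;

public class CollatorDemo {
    // Sort an array of words using locale-sensitive CollationKeys.
    static String[] sort(String[] words, Locale locale) {
        Collator collator = Collator.getInstance(locale);
        CollationKey[] keys = new CollationKey[words.length];
        for (int i = 0; i < words.length; i++) {
            keys[i] = collator.getCollationKey(words[i]);
        }
        // CollationKey implements Comparable, so the keys sort directly,
        // using fast bitwise comparison of the precomputed keys.
        Arrays.sort(keys);
        String[] sorted = new String[keys.length];
        for (int i = 0; i < keys.length; i++) {
            sorted[i] = keys[i].getSourceString();
        }
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
            sort(new String[] {"peach", "apple", "melon"}, Locale.US)));
    }
}
```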



Format is an abstract base class for formatting locale-sensitive information such as dates, messages, and numbers. Format defines the programming interface for formatting locale-sensitive objects into Strings (the format method) and for parsing Strings back into objects (the parseObject method). Any String formatted by format is guaranteed to be parseable by parseObject.

Format has three subclasses: DateFormat, MessageFormat, NumberFormat
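A brief illustration of the format/parse round trip using NumberFormat, one of the three subclasses:

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) throws ParseException {
        // The same value renders differently per locale.
        NumberFormat us = NumberFormat.getNumberInstance(Locale.US);
        NumberFormat de = NumberFormat.getNumberInstance(Locale.GERMANY);
        System.out.println(us.format(1234567.89)); // 1,234,567.89
        System.out.println(de.format(1234567.89)); // 1.234.567,89

        // Anything produced by format is guaranteed to parse back.
        Number roundTrip = us.parse(us.format(1234567.89));
        System.out.println(roundTrip);
    }
}
```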




An InputStreamReader is a bridge from byte streams to character streams: It reads bytes and translates them into characters according to a specified character encoding. The encoding that it uses may be specified by name, or the platform's default encoding may be accepted.

The class has two constructors: one that uses the platform's default encoding and one that takes an encoding name (as a String). For example, ISO 8859-9 is represented by "8859_9".


There is no easy way to determine which encodings are supported. You can use the getEncoding() method to get the name of the encoding being used by the Reader.

Characters that do not exist in a specific character set produce a substitution character, usually a question mark.
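A sketch of decoding raw bytes through an InputStreamReader with an explicit encoding (the byte values are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class DecodeDemo {
    // Decode raw bytes into text using an explicit character encoding.
    static String decode(byte[] bytes, String encoding) throws IOException {
        InputStreamReader reader =
            new InputStreamReader(new ByteArrayInputStream(bytes), encoding);
        StringBuffer out = new StringBuffer();
        int c;
        while ((c = reader.read()) != -1) {
            out.append((char) c);
        }
        reader.close();
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        // 0xE9 is 'é' in ISO 8859-1; the other bytes are plain ASCII.
        byte[] bytes = {(byte) 0x63, (byte) 0x61, (byte) 0x66, (byte) 0xE9};
        System.out.println(decode(bytes, "ISO-8859-1")); // café
    }
}
```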




An OutputStreamWriter is a bridge from character streams to byte streams: Characters written to it are translated into bytes according to a specified character encoding. The encoding that it uses may be specified by name, or the platform's default encoding may be accepted.


Each invocation of a write() method causes the encoding converter to be invoked on the given character(s). The resulting bytes are accumulated in a buffer before being written to the underlying output stream. The size of this buffer may be specified, but by default it is large enough for most purposes. Note that the characters passed to the write() methods are not buffered.


The class has two constructors: one that uses the platform's default encoding and one that takes an encoding name (as a String). For example, ISO 8859-9 is represented by "8859_9".


There is no easy way to determine which encodings are supported. You can use the getEncoding() method to get the name of the encoding being used by the Writer.

Characters that do not exist in a specific character set produce a substitution character, usually a question mark.
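The mirror-image sketch for the writing direction: encoding characters to bytes through an OutputStreamWriter, showing that the byte count depends on the encoding:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class EncodeDemo {
    // Encode text into bytes using an explicit character encoding.
    static byte[] encode(String text, String encoding) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        OutputStreamWriter writer = new OutputStreamWriter(bytes, encoding);
        writer.write(text);
        writer.close(); // flushes the converter's internal buffer
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // 'é' is one byte in ISO 8859-1 but two bytes in UTF-8.
        System.out.println(encode("\u00E9", "ISO-8859-1").length); // 1
        System.out.println(encode("\u00E9", "UTF-8").length);      // 2
    }
}
```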


Java 2 API

Java I/O E. R. Harold, O’Reilly, 1999


Locale class (java.util)


Locale class captures properties of a local environment: language and region.

Language is specified by ISO 639 two-letter code in lowercase, e.g. fr, en

Region is specified by ISO 3166 two-letter code in uppercase, e.g. CA, US


Getting locales:

1. Locale.getDefault()

2. new Locale(lang, region)

3. Locale.UK (this is a constant)


Each JVM has a default locale which can be changed by specifying the system properties ‘user.language’ and ‘user.region’.


Check available locales: Locale.getAvailableLocales() or DateFormat.getAvailableLocales()


ResourceBundle class (java.util)

A ResourceBundle is a container for key-value pairs that are locale-dependent.

Can be specified in a .properties file or in a class (best derived from ListResourceBundle).

Retrieval: ResourceBundle.getBundle(name[, locale]). Looks for most specific first, then drops region, language, tries default locale instead of desired, then overall default. Prefers classes over .properties.

Access: getString(name), getStringArray(name), getObject(name)


Formatters (java.text)

Base class Format, Usage: ‘format(something)’ for output and ‘parse(String)’ for input



Creation: getNumberInstance, getPercentInstance, getCurrencyInstance (pass a Locale optionally)



Creation: getDateInstance, getTimeInstance, getDateTimeInstance (pass a Locale optionally)


Composition: Uses a NumberFormat for time formatting.

Subclass: SimpleDateFormat allows specifying date/time formats in details using pattern strings.
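A sketch of a SimpleDateFormat pattern; the time zone is pinned to GMT so the output is reproducible:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateFormatDemo {
    // Format a Date with an explicit pattern, locale and time zone.
    // Pattern letters: yyyy=year, MM=month, dd=day, HH=hour, mm=minute.
    static String formatGmt(Date date, String pattern) {
        SimpleDateFormat format = new SimpleDateFormat(pattern, Locale.US);
        format.setTimeZone(TimeZone.getTimeZone("GMT"));
        return format.format(date);
    }

    public static void main(String[] args) {
        // new Date(0) is the epoch: 1 January 1970, 00:00 GMT.
        System.out.println(formatGmt(new Date(0), "yyyy-MM-dd HH:mm"));
    }
}
```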



Creation: new MessageFormat(pattern), also static usage via format(pattern, arguments)

Replaces parameters in message strings: with the pattern "{0} not found at {1, time}" the arguments are substituted to produce "... not found at 11:30 PM", while a different pattern over the same arguments might produce "11 Oct 99, 11:30:00: missing".
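A minimal MessageFormat sketch (the pattern and argument values are illustrative; a {1,number,integer} subformat is used so the output does not depend on time zone):

```java
import java.text.MessageFormat;

public class MessageDemo {
    public static void main(String[] args) {
        // Placeholders are filled by position; {1,number,integer}
        // applies a locale-sensitive subformat to the second argument.
        String pattern = "{0} not found ({1,number,integer} attempts)";
        String message = MessageFormat.format(
            pattern, new Object[] {"settings", Integer.valueOf(3)});
        System.out.println(message); // settings not found (3 attempts)
    }
}
```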


Additional notes

Properties are just an extension of Hashtable for representing key-value pairs; they don’t support internationalisation explicitly.



The primitive Java character type and the Java String class are based on Unicode. Unicode is a two-byte character set that includes all relevant symbols of languages currently in use throughout the world. Unicode comes with a variety of encodings.

To actually display certain characters, relevant fonts have to be installed.

The Collator class (java.text) should be used for sorting international strings.


Java IO

Input and output streams operate on bytes, whereas readers and writers operate on characters. Characters are based on Unicode and support internationalisation.

Where byte and character streams meet, a character encoding is needed. The classes OutputStreamWriter and InputStreamReader use encodings to read and write characters to and from byte streams. JVM’s may differ in what encodings are supported.

There is always a default encoding for any JVM. Examples of encodings are UTF-8 and ISO-LATIN-1, as defined by the IANA Charset Registry.


Legacy Connectivity


Distinguish appropriate from inappropriate techniques for providing access to a legacy system from Java code given an outline description of that legacy system.


Upgrading Client-Tier GUIs


In cases where the GUI is loosely coupled to the other legacy tiers you can use an applet or a small application to replace the GUI. Applets can communicate with the other tiers via TCP sockets. The applet can be signed and trusted, if necessary to access resources. Applets can also communicate with COM and CORBA objects (using bridge or Java IDL).


Screen Scrapers


Screen scrapers may be used to integrate an applet (or other) interface with an existing system. They are particularly useful when the client interface is tightly coupled to the other tiers of the system.

A screen scraper is an application that translates an existing client interface into a set of objects.

Screen scrapers usually function as a terminal emulator on one end and an object interface on the other. The screen scraper is configured to read data from terminal fields of the legacy interface and make them available via objects.


Screen scrapers have the following advantages:

§         Provide a low-level, object-based interface to the legacy app.

§         Allow you to build a new GUI over the existing client interface.


Disadvantages of screen scrapers:

§         Any changes to the legacy interface can break the new GUI.

§         Prone to causing errors in the new GUI because of unexpected outputs from the legacy interface.

§         Prone to causing the new GUI to “freeze” when the legacy interface is expecting input that the screen scraper is unaware of.


Object Mapping Tools


Object mapping tools can be used if you choose to ignore the existing legacy interface and access the underlying tiers directly. These tools are used to create proxy objects that access legacy system functions and make them available in an object-oriented form. Object mapping tools are usually more effective than screen scrapers because they are not dependent on the format generated by the existing legacy interface.


Upgrading Application Business Logic


Java servlets provide a capability to make existing applications available via an intranet or the Internet. Clients (browsers and/or applets) access servlets via HTTP or HTTPS. The servlets take the requests and communicate with the legacy system. EJBs provide a component-based approach to upgrading legacy applications. Java’s support for CORBA enables CORBA objects to be accessed from Java and Java objects to be accessed as CORBA objects. Microsoft’s JVM provides (or used to provide) a bridge between Java and COM objects. JNI may be used to write custom code to interface new business logic with an existing legacy system.


Upgrading the Data Storage Tier


JDBC may be used to access relational databases in a legacy system.

In many cases the legacy database will not support a pure JDBC driver. If the database provides ODBC support the JDBC-ODBC bridge can be used. If the existing legacy database is hierarchical or flat-file then it may be able to be imported into an RDBMS.


Securing Legacy System Components


Retrofitting a system with security is generally more expensive and less effective than redesigning and redeveloping the system to operate in a secure manner; however, budget constraints may prevent such a redesign. Legacy systems may be isolated from threats by placing them behind a firewall.


Access control to legacy systems can be controlled by requiring users and external applications to authenticate themselves with the firewall before they can access the legacy system. Auditing features of the legacy system should be used to determine who is accessing the legacy system and when. A VPN may be used to secure all communications with a legacy system.


Messaging & Protocols


Identify scenarios that are appropriate to implementation using messaging, EJB, or both.

The current EJB spec (1.1) defines beans that are invoked synchronously via method calls from EJB clients.


JMS provides client interfaces that can interface with point-to-point and publish-subscribe systems.

In the future, EJB 2.0 will add a form of asynchronous bean (the message-driven bean) that is invoked when a JMS client sends it a message.


List benefits of synchronous and asynchronous messaging.

Select scenarios from a list that are appropriate to implementation using synchronous and asynchronous messaging.

Asynchronous messaging

Loose coupling between sender and receiver
Does not block sender
Network does not need to be available, messages can be queued
Least demanding on comm. mechanisms

Good for publish-subscribe

Must use messaging to get reliability (?)

Synchronous messaging

Tight coupling between sender and receiver
Blocks sender until receiver is finished processing
Network must be available
More demanding on comm. mechanisms

Good for transaction processing
Fail-safe comm.
Coping with error situations

Java Messaging Service (JMS)

JMS provides a common way for Java programs to create, send, receive and read an enterprise messaging system’s messages.

JMS defines a set of message interfaces.

JMS provides client interfaces for point-to-point (PTP) and publish-subscribe systems.



built around the concept of message queues

each message is addressed to a specific queue; clients get messages from the queue(s) created to hold their messages



Publishers address messages to a node or address

System distributes the messages arriving from a publisher to the subscribers of that publisher


Nothing prevents a JMS application from combining PTP and publish-subscribe but JMS focuses on applications that use one approach or the other.

JMS does NOT include the following:

Load balancing/fault tolerance
Error/advisory notification
Wire protocol
Message Type Repository



Message-Oriented Middleware (MOM) provides a common, reliable way for programs to create, send, receive and read messages in any distributed enterprise system. MOM ensures fast, reliable asynchronous electronic communication, guaranteed message delivery, receipt notification and transaction control. The Java Message Service (JMS) provides a standard Java-based interface to MOM.



Point-To-Point Messaging: Multiple senders, single receiver

Publish-Subscribe Messaging: Multiple senders, multiple receivers

Request-Reply Messaging: This is the normal synchronous model and can be implemented by using two related asynchronous messages.



The JMS API defines separate interfaces for point-to-point (Queue) and publish-subscribe (Topic), so that service providers can choose to support just one model. Supports distributed transactions.


Package: javax.jms


Class overview

The class diagram here did not survive; in outline, the core JMS interfaces are:

ConnectionFactory -> Connection -> Session -> MessageProducer / MessageConsumer

Each has point-to-point and publish-subscribe specializations (Queue- and Topic- prefixes): QueueConnectionFactory, QueueConnection, QueueSession, QueueSender, QueueReceiver (plus QueueBrowser); and TopicConnectionFactory, TopicConnection, TopicSession, TopicPublisher, TopicSubscriber.
MessageListener interface for MessageConsumer: void onMessage(Message msg).

There are also various message classes depending on content type, e.g. TextMessage, BytesMessage, MapMessage.


QueueBrowser can be used to look at a queue without consuming any messages.

The call createSubscriber creates a subscriber that only sees messages while connected, whereas with createDurableSubscriber messages are queued for the subscriber.



Look up a concrete ConnectionFactory through JNDI. Look up the topic/queue through JNDI. Get a connection from the factory. Call start on the connection. Create a session from the connection (passing flags for acknowledgement and transaction mode).


Sender: Create sender from session (passing queue).  Create message from session. Send message using sender instance, passing delivery mode and message instance.


Receiver: Create receiver from session (passing queue). In endless loop: Call receive on Receiver, returning message, handle message.


Publisher: Create publisher from session (passing topic). Create message from session. Publish message using publisher instance.


Subscriber: Create subscriber from session (passing topic). Call setMessageListener on subscriber.

At the end: Close session. Close connection.
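The point-to-point sender steps above can be sketched as follows. This is an illustrative outline only: it requires a JMS provider on the classpath to compile and run, and the JNDI names ("ConnectionFactory", "myQueue") are assumptions that depend on provider configuration.

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class QueueSendSketch {
    public static void main(String[] args) throws Exception {
        // Look up the administered objects through JNDI.
        InitialContext jndi = new InitialContext();
        QueueConnectionFactory factory =
            (QueueConnectionFactory) jndi.lookup("ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("myQueue");

        // Factory -> connection -> session (non-transacted, auto-ack).
        QueueConnection connection = factory.createQueueConnection();
        connection.start();
        QueueSession session =
            connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // Create a sender and a message, then send with a delivery mode.
        QueueSender sender = session.createSender(queue);
        TextMessage message = session.createTextMessage("hello");
        sender.send(message, DeliveryMode.PERSISTENT,
                    Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);

        // At the end: close the session, then the connection.
        session.close();
        connection.close();
    }
}
```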







Given a scenario description, distinguish appropriate from inappropriate protocols to implement that scenario. Identify a protocol, given a list of some of its features, where the protocol is one of the following:


HTTP Properties: Client-Server Architecture

The HTTP protocol is based on a request/response paradigm. The communication generally takes place over a TCP/IP connection on the Internet. The default port is 80, but other ports can be used. This does not preclude the HTTP/1.0 protocol from being implemented on top of any other protocol on the Internet, so long as reliability can be guaranteed.


The HTTP protocol is connectionless and stateless. After the server has responded to the client's request, the connection between client and server is dropped and forgotten. There is no "memory" between client connections. A pure HTTP server implementation treats every request as if it were brand-new, i.e. without context.


An extensible and open representation for data types: HTTP uses Internet Media Types (formerly referred to as MIME Content-Types) to provide open and extensible data typing and type negotiation. When the HTTP server transmits information back to the client, it includes a MIME-like (Multipurpose Internet Mail Extensions) header to inform the client what kind of data follows the header. Translation then depends on the client possessing the appropriate utility (image viewer, movie player, etc.) corresponding to that data type.

HTTPS(Secure Hypertext Transfer Protocol)


HTTPS (Secure Hypertext Transfer Protocol) is a Web protocol developed by Netscape and built into its browser that encrypts and decrypts user page requests as well as the pages that are returned by the Web server. HTTPS is really just the use of Netscape's Secure Socket Layer (SSL) as a sublayer under its regular HTTP application layer. (HTTPS uses port 443 instead of HTTP port 80 in its interactions with the lower layer, TCP/IP.) SSL uses a 40 or 128-bit key size for the RC4 stream encryption algorithm, which is considered an adequate degree of encryption for commercial exchange.


Suppose you use a Netscape browser to visit a Web site such as NetPlaza and view their catalog. When you're ready to order, you will be given a Web page order form with a URL that starts with https://. When you click "Send" to send the page back to the catalog retailer, your browser's HTTPS layer will encrypt it. The acknowledgement you receive from the server will also travel in encrypted form, arrive with an https:// URL, and be decrypted for you by your browser's HTTPS sublayer.


HTTPS and SSL support the use of X.509 digital certificates from the server so that, if necessary, a user can authenticate the sender. SSL is an open, nonproprietary protocol that Netscape has proposed as a standard to the World Wide Web Consortium (W3C). HTTPS is not to be confused with SHTTP, a security-enhanced version of HTTP developed and proposed as a standard by EIT.




CORBA and IIOP assume the client/server model of computing, in which a client program always makes requests and a server program waits to receive requests from clients. At the wire level, ORBs communicate using the General Inter-ORB Protocol (GIOP). The GIOP is implemented in specialized mappings for one or more network transport layers. Undoubtedly, the most important specialized mapping of GIOP is IIOP, which passes requests or receives replies through the Internet's transport layer using the Transmission Control Protocol (TCP). Other possible transport layers include IBM's Systems Network Architecture (SNA) and Novell's IPX.


For a client to make a request of a program somewhere in a network, it must have an address for the program. This address is known as the Interoperable Object Reference (IOR). Using IIOP, part of the address is based on the server's port number and Internet Protocol (IP) address. In the client's computer, a table can be created to map IORs to proxy names that are easier to use. The GIOP lets the program make a connection with an IOR and then send requests to it (and lets servers send replies). A Common Data Representation (CDR) provides a way to encode and decode data so that it can be exchanged in a standard way.


CORBA is not the only architecture that uses IIOP. Because TCP/IP is available on almost any machine that runs today, more parties now use IIOP. When another architecture is IIOP-compliant, it not only gains a well-proven communication transport but can also communicate with any ORB implementation that is IIOP-compliant. The possibilities are endless.





The Transport layer employs JRMP, also known as the RMI Wire Protocol, to send method invocations and associated parameters and to return values and exceptions from one Java virtual machine (JVM) to another. JRMP is a simple protocol consisting of five messages, plus an extra five for multiplexing flow control.


All JRMP sessions consist of a header followed by one or more messages. The header contains just the ASCII codes for the characters "JRMI", the protocol version, and the "subprotocol" to be used. There are three subprotocols: SingleOpProtocol, StreamProtocol, and MultiplexProtocol. SingleOpProtocol signifies that only one message follows a header before the end of a session (i.e., the connection closes). StreamProtocol and MultiplexProtocol can transfer one or more messages. The latter is used when multiplexing calls from both client and server on a single socket, as described below.


Communicating clients and servers typically each open a socket to the other (i.e., both systems connect and listen for connections). The client's socket typically invokes methods on server-side objects, and the server's socket calls client-side objects (e.g., callbacks). The figure shows a hypothetical StreamProtocol situation. The client sends the Call message to invoke a server object's method; the server then invokes this method and replies with a Return containing any results. Assuming that a remote object is returned, the client then sends a DgcAck message to let the server's garbage collector know that it has received the remote object. On another socket, the server sends a Ping to find out whether the client is alive, and the client replies with a PingAck.
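The Call/Return exchange described above is easiest to see from the API side. The following single-JVM sketch (the interface, class, and binding names are invented for illustration) exports a remote object, binds its stub in an in-process registry, and invokes it; the invocation and its result travel over loopback TCP as JRMP Call and Return messages.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface; every remote method must declare RemoteException.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class JrmpDemo {
    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) throws Exception {
        GreeterImpl impl = new GreeterImpl();
        // Export the implementation; the returned stub is what clients invoke on.
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);

        // In-process registry on an anonymous port (avoids clashing with rmiregistry).
        Registry reg = LocateRegistry.createRegistry(0);
        reg.rebind("greeter", stub);

        // "Client" side: look up the stub and invoke it. The invocation and its
        // String argument travel as a JRMP Call message, the result as a Return.
        Greeter remote = (Greeter) reg.lookup("greeter");
        System.out.println(remote.greet("JRMP"));

        // Unexport both remote objects so the JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(reg, true);
    }
}
```

Modern JVMs generate the stub as a dynamic proxy at export time, so no rmic-compiled stub classes are needed.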


Default applet security restrictions deny applets the right to open sockets back to any server other than their originating host; they also block any attempt to listen for socket connections. This being the case, how do clients listen for server connections?


Enter the MultiplexProtocol and its group of five messages: Open, Close, CloseAck, Request, and Transmit. They allow client and server to simulate the StreamProtocol's two-way communication using a single socket. In the current implementation, up to 256 virtual connections can be opened, each identified by a unique ID.


Unfortunately, connecting via a socket back to the server is not always possible for applets running behind firewalls (e.g., on a corporate intranet), which typically block any attempt to open a socket back to the Internet. Should it fail to open a connection, an RMI client wraps its method invocation inside the body of an HTTP request (which is the protocol browsers use to communicate with Web servers), and the RMI server sends any results as an HTTP response.


This workaround is a smart solution, since HTTP is a firewall-trusted protocol. Still, performance takes a hit due to the time needed to convert messages to HTTP requests. In addition, no multiplexing of invocations can be accomplished, because keeping the connection open between client and server is not part of HTTP 1.0. The primary reason for SingleOpProtocol's existence is to encapsulate RMI through HTTP.



Who they talk to


HTTP: An HTML-forms-based client interacts with a servlet on the server; the servlet interacts with the business layer, and the business layer interacts with the persistence layer. RMI: This is possible if the objects in the user interface and business layers are all Java objects. The persistence layer is mostly accessed through JDBC; other object-relational mapping of the data layer is also possible.


Advantages of RMI: Objects are passed by value, so the server/client can reconstitute them easily. Data types can be any Java objects, and any Java object can be passed as an argument, provided it implements the Serializable interface. Disadvantage of RMI: Heterogeneous (non-Java) objects are not supported. CORBA: If the objects in the client layer and the business layer are heterogeneous, i.e. implemented in C, C++, Java, or Smalltalk, then CORBA is most suitable. Advantage of CORBA: Heterogeneous objects are supported. Disadvantages of CORBA: Objects are not passed by value; only the argument data is passed, and the server/client has to reconstitute the objects from the data. Only commonly accepted data types can be passed as arguments. DCOM: This works best in a Windows environment. Distributed Object Communication


§         HTTP: Simple and established. Has to communicate with a servlet or JavaServer Pages.

§         Cannot communicate with a Java class directly.

§         RMI: Serializable objects are passed by value, so the server/client can reconstitute them easily; remote objects are passed by reference. Data types can be any Java objects, and any Java object can be passed as an argument, provided it implements the Serializable interface.

§         Heterogeneous objects are not supported.

§         CORBA: Heterogeneous objects are supported. Objects are not passed by value; only the argument data is passed, and the server/client has to reconstitute the objects from the data. Only commonly accepted data types can be passed as arguments.

§         DCOM: If Windows is the deployment platform, DCOM suits it well; it works best in a Windows environment.

§         Distributed Object Frameworks: RMI, CORBA, DCOM, and EJB are distributed object frameworks. Basic Three-Tier Java Technology Architecture: The three-tier Java technology architecture is achieved with HTML, an applet, or a Java application on the client; servlets and JavaServer Pages in the middle tier; and JDBC communication to the persistence (database) layer.
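The pass-by-value behavior the notes above attribute to RMI rests on Java serialization. A minimal sketch (the `Order` class and its fields are invented for illustration) shows the round trip RMI performs on Serializable arguments: the receiver gets an equivalent copy, never the original instance.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableArg {
    // Hypothetical value object; RMI can pass it by value only because
    // it implements Serializable.
    static class Order implements Serializable {
        private static final long serialVersionUID = 1L;
        String item; int qty;
        Order(String item, int qty) { this.item = item; this.qty = qty; }
    }

    public static void main(String[] args) throws Exception {
        Order original = new Order("widget", 3);

        // What RMI does under the covers: write the argument to a byte stream...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // ...and reconstitute an equivalent copy on the other side.
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Order copy = (Order) in.readObject();
            System.out.println(copy.item + " x" + copy.qty);
            System.out.println("same instance: " + (copy == original));
        }
    }
}
```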


HTTP and HTTPS are very similar protocols; the only difference is that HTTPS adds a layer of security (SSL). Both can carry a variety of data types, but they carry no logic; objects can only be executed if another protocol handles them. HTTP is the lowest layer of logic and can only be used as a delivery mechanism for other protocols. JRMP is a robust object protocol that communicates well with Java-based objects. It can pass object references rather than just values that have to be reconstituted, so an object may be executed by the client rather than the server. In the event that the server is secured or cannot communicate in the most efficient manner, JRMP falls back to HTTP.


JRMP is only capable of passing Java objects. IIOP is the most flexible of the transport mechanisms: it can communicate objects created in C, C++, Java, and Smalltalk. However, it passes data only by value, requiring the server to do all the work, and only common data types can be passed as arguments, making it more restrictive than JRMP, which allows any Java data type.


The Transmission Control Protocol and Internet Protocol (TCP/IP) form the Internet’s transmission basis.

IP: Lower level, packet-based, connectionless, no guaranteed delivery.

TCP: Higher level, connection-oriented, error-correction.

Other application-level protocols are based on TCP/IP, e.g. FTP, HTTP.


SSL runs above TCP/IP and below application-level protocols. It supports client and server authentication, and privacy through encryption.

Handshake process: Authenticate server to client, select cipher supported by both client and server, optionally authenticate client to server, use public-key encryption to exchange keys, establish encrypted SSL session.
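The cipher-selection step of the handshake can only draw on suites the local provider enables. As a small sketch using the modern JSSE API (a later release than the JSSE 1.0.2 described further down in these notes), this prints whether the default SSLEngine has cipher suites and protocols available for negotiation:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class CipherSuites {
    public static void main(String[] args) throws Exception {
        // The handshake can only pick from suites both sides support;
        // JSSE exposes this JVM's side of that negotiation.
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        System.out.println("enabled suites present: "
                + (engine.getEnabledCipherSuites().length > 0));
        System.out.println("protocols present: "
                + (engine.getEnabledProtocols().length > 0));
    }
}
```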


Port overview (protocol: standard port)

File Transfer Protocol (FTP): 21

Remote login (Telnet): 23

Simple Mail Transfer Protocol (SMTP): 25

WWW transmission protocol (HTTP): 80

Post Office Protocol (POP3): 110

Simple Network Management Protocol (SNMP): 161

Lightweight Directory Access Protocol (LDAP): 389

Secure Sockets Layer (HTTPS): 443

Java Remote Method Protocol (JRMP, RMI registry): 1099

Layer Two Tunneling Protocol (L2TP): 1701

Internet Inter-ORB Protocol (IIOP): 683

Select from a list security restrictions that Java 2 environments normally impose on applets running in a browser. The Java 2 security model is policy-based and has superseded the sandbox/trusted approach of Java 1.1. In Java 1.1 remote code (applets, for example) that was not trusted was constrained to the sandbox. If the remote code was signed and trusted then it could access local resources.


Cryptography, Digital signatures and Certificates can be used to increase the security of a system. Java offers a number of interfaces for related services. Firewalls are also important for protecting the gateway between trusted and untrusted networks.


Code Source: A combination of a set of signers (certificates) and a code base URL. By default, Java 2 uses a policy file to associate permissions with code sources.

Security Policy File: A permission is the right to access a protected resource or guarded object. In Java 2, permissions are specified in the security policy file. Only one policy is in effect at a time. A policy file consists of a number of grant entries; each grant entry describes the permissions (one or more) granted to a code source.
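As an illustration, a policy file with two hypothetical grant entries (the code base URL, signer alias, host, and paths are all invented) might look like this; the first grants permissions to signed code from one code source, the second to all code:

```
grant codeBase "http://www.example.com/classes/" signedBy "duke" {
    permission java.io.FilePermission "/tmp/*", "read";
    permission java.net.SocketPermission "*.example.com", "connect";
};

grant {
    permission java.util.PropertyPermission "java.version", "read";
};
```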


Policy class: You can subclass Policy to create your own security policy. The java.security package: The following are some of the classes in the package:


CodeSource – This class extends the concept of a codebase to encapsulate not only the location (URL) but also the certificate(s) that were used to verify signed code originating from that location.


KeyStore – This class represents an in-memory collection of keys and certificates. It manages keys and trusted certificates.


MessageDigest – The MessageDigest class provides applications the functionality of a message digest algorithm, such as MD5 or SHA.
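For example, computing the SHA-1 digest of a short message with the MessageDigest class (the hex value in the comment is the published FIPS test vector for "abc"):

```java
import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        // One-way hash of a message; any change to the input changes the digest.
        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha.digest("abc".getBytes("UTF-8"));

        // Render the digest as lowercase hex.
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        System.out.println(hex);   // a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```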


Permission – Abstract class for representing access to a system resource.

Policy – This is an abstract class for representing the system security policy for a Java application environment (specifying which permissions are available for code from various sources).


ProtectionDomain – The ProtectionDomain class encapsulates the characteristics of a domain, which encloses a set of classes whose instances are granted the same set of permissions.


Security – Centralizes all security properties and common security methods.


Given an architectural system specification, identify appropriate locations for implementation of specified security features, and select suitable technologies for implementation of those features.


Exposure to threats can be mitigated by using:

Authentication, Authorization (ACLs), Protecting Messages, Auditing


Web tier authentication (This is the usual location for this)


§         Basic HTTP – the web server authenticates a principal with user name & password from Web client

§         Form-based – lets developers customize the authentication user interface

§         HTTPS mutual authentication – the client and server use X.509 certificates to establish identity over an SSL channel.


EJB/EIS tier authentication


§         For EJBs, you can use protection domains. Thus the EJB tier can entrust the web tier to vouch for the identity of users.

§         Put a protected web resource in front of a protected EJB resource.

§         Have every web resource that calls an EJB resource route through a protected web resource

§         For access to EIS tier resources authentication is usually carried out by the component accessing the EIS resource.

§         You can have the container manage the EIS resource authentication or have the app do this itself.

§         Authorization

§         In J2EE a container serves as an authorization boundary between callers and its components. The authorization boundary is inside the authentication boundary so authorization occurs within the context of successful authentication.

§         For component to component invocations inside the container the calling component must make its credentials available to the called component.

§         You can have file-based & code-based security in J2EE.

§         Access control policy is set at deployment time.

§         Controlling access to resources in the container (deployment descriptor)

§         To control access to web resources, specify a security constraint in the deployment descriptor.

§         To control access to EJB, specify roles in the deployment descriptor.

§         You can specify methods of the remote & home interface that each security role is allowed to invoke

§         Protecting Messages

§         To ensure message integrity you can use:

§         Message signature – an enciphered digest of the message contents (costly in terms of CPU cycles)

§         Message confounder – ensures message authentication is useful only once

§         A deployer must configure the containers involved in a call to implement integrity mechanisms either because the call will traverse open or unprotected networks or because the call will be made between components that do not trust each other.



When security is breached it is usually more important to know who has been allowed access than who has not. Audit records need to be well protected – tapes or logging to a printer vs disk drive


Communication security threats

Eavesdropping: Information is kept intact, but privacy is compromised.

Tampering: Information in transit is changed or replaced and then sent on.

Impersonation: Information passes to the wrong person, who poses as the intended recipient. Persons or systems can pretend to be somebody else (Spoofing) or something else (Misrepresentation).


Goals of cryptography

Privacy: Communication between two parties is unintelligible to an intruder.

Tamper detection: Verify that information has not been changed in transit.

Authentication: Confirm that the sender of the information is who he claims to be.

Non-repudiation: Prevents senders from claiming at a later date the information was never sent.


Symmetric Key Cryptography

One key is used for encryption and decryption.

Problem is key exchange between remote parties. Advantage is efficient implementation.
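A minimal sketch of the single-key property using the JCE API (note that Cipher.getInstance("AES") defaults to ECB mode with PKCS5 padding, which is fine for illustration but not for real traffic):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        // One shared key does both jobs; whoever holds it can encrypt AND decrypt.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        Cipher cipher = Cipher.getInstance("AES"); // defaults to AES/ECB/PKCS5Padding
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes("UTF-8"));

        // Re-initialize the same cipher with the SAME key to decrypt.
        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
    }
}
```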


Public-Key Cryptography

Also called “ asymmetric cryptography”. Pair of two associated keys: The private key is kept secret; the public key is published openly. Encryption with public key ensures privacy.


Digital signatures


Message digest or one-way hash function: Number of fixed bit length that changes with every subtle change in the message and cannot be reversed.


Digital signature: Encryption of the message digest with the sender’s private key.

Method: Send digital signature along with message. Receiver calculates message digest of the received message and compares with the decrypted message digest.
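The sign-then-verify method just described can be sketched with the java.security.Signature engine class, which digests and encrypts in one step (the algorithm choice and message here are illustrative):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] message = "wire me $100".getBytes("UTF-8");

        // Sender: digest the message and encrypt the digest with the PRIVATE key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // Receiver: recompute the digest and check it with the sender's PUBLIC key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        System.out.println("verified: " + verifier.verify(sig));

        // A tampered message no longer matches the signed digest.
        verifier.initVerify(pair.getPublic());
        verifier.update("wire me $999".getBytes("UTF-8"));
        System.out.println("tampered verifies: " + verifier.verify(sig));
    }
}
```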




Problem: Verify validity of a person’s public key.


Certificate: Electronic document that identifies an individual, a server, a company, or some other entity and associates the entity with a public key.


Certification authority (CA): Validates identity and issues certificates.

Certificates contain identity name, expiration date, CA name and digital signature, and other information like a serial number.


Authentication: Process of confirming an identity. In general, can be based on assets (ID cards, private key file), knowledge (ID and password), or individual properties (biometrics).


Network authentication can be password-based or certificate-based. Advantage of certificates: The password does not need to be sent across the network, and single sign-on is easier.


X.509 v3: ITU standard for certificate content. X.509 v3 binds a distinguished name (DN) to a public key. A DN is a series of name-value pairs, e.g. uid=doe,cn=John Doe,o=ABC,c=US, as in LDAP.


X.509 Data Section: 1) Version no of X.509 standard 2) Serial number (unique to CA) 3) Information 4) Public Key Info: Algorithm and Data 5) Validity Period 6) Subject’s DN 7) Optional extensions


X.509 Signature Section: 1) Algorithm or cipher used by the CA 2) Digital signature of certificate by CA


Certificate chains: Hierarchy of CA’s, parents certifying children. Allows verifying certificates by just having the trusted public key of the root CA.


Public Key Infrastructure (PKI): Set of standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates.

Key recovery or key escrow: Ability to retrieve backups of private keys under certain, carefully defined conditions. Example is the m-of-n principle, m of n trusted individuals have to agree.


Public Key Cryptography Standards (PKCS): Set of standards driven by RSA Labs with broad industry support. Includes RSA public-key and Diffie-Hellman algorithms.


Java Security Model


Security architecture has changed from JDK 1.1 to 1.2. In JDK 1.1, the ‘sandbox model’ distinguished between remote code (applets) that was run with security restrictions in the sandbox, whereas local code was totally trusted. With Java 2, all application code is subject to security control.

The Java language supports writing safe code: Strong type checking, automatic memory management, range checking, JRE bytecode verifier.

The Java 2 platform explicitly supports access control to various important resources like files and sockets through permissions and the SecurityManager.


Protection domain: Conceptual set of resources that are currently available to a principal.

Permissions are resolved by checking the protection domain assigned to a class.

Two main protection domains: Application and system.




Permission: Abstract base class for all permissions.


PermissionCollection: List of permissions of same type. Permissions: List of PCs.

Types of permissions: (Basic), File, Socket, Property, Runtime, AWT, Net, Security, All

Defining new permissions: Descend from Permission, implement implies method.
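The implies method is what the access controller ultimately consults when resolving permissions. A small demonstration with the built-in FilePermission (Unix-style paths assumed; "-" matches recursively, "*" matches direct children only):

```java
import java.io.FilePermission;

public class PermissionDemo {
    public static void main(String[] args) {
        // "-" grants access to all files under /tmp, recursively.
        FilePermission recursive = new FilePermission("/tmp/-", "read");
        // "*" grants access only to files directly inside /tmp.
        FilePermission direct = new FilePermission("/tmp/*", "read");

        FilePermission nested = new FilePermission("/tmp/logs/app.log", "read");
        System.out.println("recursive implies nested: " + recursive.implies(nested));
        System.out.println("direct implies nested: " + direct.implies(nested));
    }
}
```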


CodeSource: Defines URL of a remote codebase along with associated certificates.

Policy: The current security policy. Policy.getPolicy(), getPermissions(CodeSource)


ProtectionDomain: Uniquely identified by a CodeSource. A class belongs to exactly one PD.


AccessController: Has checkPermission method (also called by SecurityManager)


AccessControlContext: Defines state of access control, used to pass access info between threads.

Principal: Interface for entities like persons, companies, used in X.509 certificates





keytool: Create public/private key pairs, issue certificate requests (for sending to a CA), manage the keystore.



jarsigner: Signing and verifying JAR files. Usage: “jarsigner [options] jar-file alias”





Can be found in {JDK Home}/lib/security:

java.security – Security Properties File

java.policy – System Policy File

cacerts – Certificates Keystore File

Java Cryptography Architecture (JCA)


Engine classes: MessageDigest, Signature, KeyPairGenerator, KeyStore, SecureRandom etc.

Interfaces are implemented by providers.

Default provider SUN is shipped with Sun’s JDK 1.2, includes DSA, MD5, SHA-1, X.509 certs, JKS.


Additional Java Security Services

Java Authentication and Authorization Service (JAAS) 1.0

Supports access control based on who runs the code. J2SE only supports access control based on where code is coming from and who signed it.

Implements Pluggable Authentication Module (PAM).

Principal: Named entity or account. (Roles and Groups are Principals as well.)

Subject: Person or Service using a system. Represented by a set of Principals.

Credential: Additional security info about a subject, e.g. certificate, password. Credentials can be public or private.

Two-phase login like two-phase commit.

Authorization: Policy files like Java 2 policies, with the new principal keyword.


Java Cryptography Extension (JCE) 1.2.1

Extensions to the Java Cryptography Architecture (JCA), that is part of J2SE.


Java Secure Sockets Extension (JSSE) 1.0.2

Provides SSL (Secure Sockets Layer) 3.0 and TLS (Transport Layer Security) 1.0.

SSLSocket and SSLServerSocket classes

HTTPS support




A firewall is a system that enforces an access control policy between two networks.

Purpose of firewalls: Control incoming and outgoing traffic, address translation, monitoring.

Screening types: Check whether data have been requested, check sender, check content.

Network Address Translation (NAT): Make it seem like all outgoing traffic originates from the firewall.

Attack types: Information theft, Information sabotage, Denial of service.

Firewalls are certified by the ICSA (International Computer Security Association).


ICSA firewall categories

Packet filter firewalls: Based on information in single IP packets; port, source/dest IP address

Application-level proxy servers: Inspection of data for specific services, e.g. HTTP, FTP, SMTP

Stateful packet inspection firewalls: Stores requests and checks incoming data against them.


Additional support might be provided for:

De-militarised zones (DMZ): Zone for protected, controlled public access, separate from intranet.

Virtual Private Networks (VPN): VPNs are more cost-effective than leased lines or dial-in.

Virus detection

Load Balancing


Firewall implementation types

Router/Firmware-based firewalls: Built into router. Usually limited capabilities available.

Software-based firewalls: Sophisticated applications on a workstation. Difficult to maintain.

Dedicated firewall appliances: Best solution. High throughput, less maintenance, more security.



Enterprise JavaBeans
Richard Monson-Haefel
Mastering EJB II
Ed Roman
Design Patterns
Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, Grady Booch
UML Distilled
Martin Fowler
Java 2 Network Security
Marco Pistoia, Duane F. Reller, Deepak Gupta, Milind Nagnur, Ashok K. Ramani
Java Enterprise in a Nutshell
David Flanagan, Jim Farley, William Crawford, Kris Magnusson
Design Patterns and Contracts
Jean-Marc Jezequel, Michel Train, Christine Mingins


1. How is database middleware used to access legacy databases?

Database middleware enables legacy databases to be accessed from Java by translating between JDBC and the drivers that are supported by the legacy databases.


2. What three types of components comprise an application design?

An application design is comprised of legacy components, vendor products, and developmental software.


3. What is the publish/subscribe model?

The publish/subscribe model is an approach to distributed system communication in which publishers publish information to a subject address and subscribers subscribe to information at a subject address. The publish/subscribe model has the benefit of making publishers independent of location. This enables subscribers to subscribe to information without having to know the location of a publisher.


4. What is an off-board server?

An off-board server is a server that executes as a proxy for a legacy system.

It communicates with the legacy system using the custom protocols supported by the legacy system. It communicates with external applications using industry-standard protocols.


5. What is a message digest?

A message digest is a value that is computed from a message, file or other byte stream that serves as a digital fingerprint for the byte stream. Message digests are computed using one-way functions.


6. What is the purpose of HTTP tunneling?

HTTP tunneling is used to encapsulate other protocols within the HTTP or HTTPS protocols. It is typically used to pass protocols that would normally be blocked by a firewall through the firewall in a controlled manner.


7. What are the phases of the object-oriented development lifecycle?

The phases of the object-oriented development lifecycle are problem statement, object-oriented analysis, Java architecture design, object-oriented design, and object generation.


8. What is the purpose of JNDI?

JNDI provides a platform-independent Java interface to naming and directory services, such as LDAP, NDS, and Active Directory.


9. What is IIOP used for?

IIOP is used to support communication between object request brokers via TCP/IP.


10. What is a digital certificate?

A digital certificate is a message that is signed by a certification authority that certifies the value of a person or organization's public key.


11. How has the sandbox changed with Java 2?

Java 2 provides the capability to specify a security policy that determines the access that an applet or application is allowed based on its source and the identities of those who have signed the code.


12. What are the advantages of asynchronous architectures?

Asynchronous architectures decouple senders and receivers. This brings performance advantages for both the sender and the receiver. The sender is able to even out its communication traffic over the course of a day, which is helpful in cases where sender and receiver communicate over low-bandwidth links. The receiver can even out its processing load by processing the sender's messages as time permits.


13. What is a virtual private network?

A virtual private network is a network between geographically-dispersed sites that takes place over an un-trusted network. Encryption and authentication mechanisms are used to secure data that is transmitted over the un-trusted network.


14. What advantages do servlets have over CGI programs?

Servlets are written in Java and are platform-independent. Servlets run under the JVM and may be secured using the Java sandbox. Servlets run as threads and may be preloaded to improve their performance.


15. What is OQL?

OQL is a database query language that is based on SQL and supports the adding, retrieving, querying, and invocation of objects.


16. What is a certification authority?

A certification authority is an organization that is trusted to verify the public keys of other organizations and individuals. Certification authorities issue digital certificates that verify the public keys of these entities.


17. How is JNI used to access legacy system software?

JNI is used to write custom code to interface Java objects with legacy software that does not support standard communication interfaces.


18. What is the difference between a component and an object?

A component is an object that has been created, initialized, and serialized. A component may be used by deserializing it.


19. What does a deployment diagram specify?

A deployment diagram identifies the physical elements (processing nodes) of a system, communication links between nodes, and the mapping of software components to these elements.


20. What is the Common Gateway Interface?

The Common Gateway Interface is a standard for communication between Web servers and external programs.


21. How does legacy object mapping work?

Legacy object mapping builds object wrappers around legacy system interfaces in order to access elements of legacy system business logic and database tiers directly. Legacy object mapping tools are used to create proxy objects that access legacy system functions and make them available in an object-oriented manner.


22. What is the purpose of a use case diagram?

A use case diagram describes the users of a system and the functions and services that are provided to the users.


23. In what cases are synchronous architectures more appropriate than asynchronous architectures?

Synchronous architectures are more appropriate than asynchronous architectures in applications where the sender and receiver must participate in a message exchange, and the sender must respond to the receiver in a limited time frame. An example of this is credit card authorization. The sender needs a response within a short time to complete an electronic commerce transaction and to notify the user that the purchase has been completed.


24. What is the purpose of a class diagram?

A class diagram identifies classes, relationships between classes, attributes, and methods.


25. What is the purpose of a transaction monitor?

Transaction monitors are programs that monitor transactions to ensure that they are completed in a successful manner. They ensure that successful transactions are committed and that unsuccessful transactions are aborted.


26. How is Java-to-Com bridging used to access COM objects?

A Java-to-COM bridge enables COM objects to be accessed as Java classes and Java classes to be accessed as COM objects.


27. What is IPSec?

IPSec is a set of IP extensions that provide security services, such as encryption, authentication, and data integrity. IPSec is typically used with a VPN.


28. What is a screen scraper?

A screen scraper is a software application that translates an existing client interface into a set of objects that can be used to build new client software.


29. What is the principle of least privilege?

The principle of least privilege requires that an application be given only those privileges that it needs to carry out its function and no more.


30. What is Secure Sockets Layer (SSL)?

SSL is a protocol that sits between the transmission control protocol and application layer protocols. It provides authentication and encryption services to the application layer protocols.


31. What is the purpose of a firewall?

Firewalls are used to mediate and control all information that is communicated between an external (untrusted) network and an internal (trusted) network. Firewalls make use of IP filtering and application proxies to implement firewall security policies.


32. What is the content of the Java 2 security policy file?

The security policy file contains a series of grant entries that identify the permissions granted to an applet or application based on its source and signatures.


33. What are the advantages of thin clients?

Thin clients separate the client tier from the business logic and data storage tiers. This enables applications to be distributed, scaled, upgraded, managed, and maintained more easily.


34. In which application lifecycle phases is an application architecture produced?

Application architectures may be produced during requirements analysis. However, an application's architecture is not formalized until design. An architecture may be updated based on problems or opportunities that are encountered in subsequent lifecycle phases.


35. What is the applet sandbox?

The applet sandbox is a mechanism by which all applets that are loaded over a network are prevented from accessing security-sensitive resources, such as the local file system and networking resources.


36. What is the connection keep-alive feature of HTTP 1.1?

HTTP 1.1's connection keep-alive feature allows the TCP connection between a browser and a Web server to remain open throughout multiple HTTP requests and responses. This significantly improves the overall performance of browser-server communication.




1. What are the advantages of a VPN?

a) Lower cost

b) Makes use of existing network connectivity

c) Supports data encryption, integrity, and authentication

d) Java-based security solution


Answer: A, B, C (A follows from B; C is true and D is not.)


2. X.509 version 3 supports which of the following?

a) The format and content of digital certificates

b) The IPSec standard

c) The Data Encryption Standard


Answer: A


3. Suppose a small company has a character-terminal-based legacy application that it wants to make available over the Web. However, it does not want to modify the legacy application in order to support Web connectivity. Which technologies are appropriate to accomplish these goals?


Off-board servers

Screen scrapers.




Answer: ABC


4. Which diagrams map software components to processing nodes?

Component diagram

Collaboration diagram

Deployment diagram

Object diagram


Answer: C


5. Which technologies are effective in securing legacy systems?


Virtual private networks

Screen scrapers

Java RMI


Answer: A, B


6. Which features were introduced with Java 2?

The capability to sign JAR files.

The applet sandbox.

The capability to specify an applet security policy.

Support for X.509 version 3 certificates.


Answer: C and D


7. Which of the following are characteristics of HTTP tunneling?

It uses the hypertext transfer protocol.

It is used to pass other protocols through a firewall.

It is part of the Java 2 API.

It is used to sign JAR files.


Answer: A, B


8. Which of the following are contained in a Java security policy file?

grant entries

trusted code

aliases and their public keys

digital certificates.


Answer: A


9. Which of the following security features are provided by digital signatures?






Answer: B, C, D
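Signing and verifying with the Java Security API looks roughly like the sketch below (the RSA algorithm, key size, and class name are illustrative; a real application would load its private key from a keystore):

```java
import java.security.*;

public class SignDemo {
    // Sign a message with a freshly generated RSA key pair, then verify it.
    public static boolean signAndVerify(byte[] message) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // Verify with the matching public key; any change to the message
        // or to the signature bytes makes verification fail.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("valid = " + signAndVerify("important message".getBytes("UTF-8")));
    }
}
```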


10. Which of the following capabilities are provided by SSL?

For a client to authenticate a server

For a server to authenticate a client

Mediate and control all communication between an internal (trusted) network and an external (untrusted) network

For a client and a server to encrypt their communication using a selectable encryption algorithm


Answer: A, B, D. C is provided by a firewall.




11. Suppose that the business logic of an existing application is implemented using a set of CGI programs. Which Java technologies can be used to implement the CGI programs as a Java-based solution?


Screen scrapers.

Enterprise JavaBeans



Answer: C, D


12. Which protocols do ORBs from different vendors use to communicate?






Answer: B


13. Which of the following are contained in a keystore?

Grant entries

Trusted code

Aliases and their public keys

Digital certificates


Answer: C, D
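A minimal sketch of working with a keystore through the java.security.KeyStore API (an empty in-memory keystore here; normally you would load an existing keystore file with a password):

```java
import java.security.KeyStore;
import java.util.Collections;

public class KeyStoreDemo {
    // Create an empty in-memory keystore; in practice you would call
    // ks.load(new FileInputStream("my.keystore"), password) instead.
    public static KeyStore emptyKeyStore() throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        return ks;
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = emptyKeyStore();
        // Entries are looked up by alias; an entry holds a key
        // and/or a certificate chain.
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " -> " + ks.getCertificate(alias));
        }
        System.out.println("entries = " + ks.size());
    }
}
```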


14. Which of the following are phases of the OO development cycle?

OO analysis

Object generation

Problem statement

Development testing


Answer: A, B, C


15. What are the primary differences between the use of object mapping and screen scrapers in providing access to legacy systems?

Object mapping is less prone to fail because of anomalies in the legacy system user interface.

Screen scrapers appear as display terminals to legacy systems and object mappers generally do not

Screen scrapers execute on character-display terminals and object mapping tools do not.

Screen scrapers are more secure than object mapping tools.


Answer: A, B


16. Which Java benchmark measures applet performance?






Answer: D


17. What is a potential advantage of a one-tier application architecture?






Answer: A


18. Which of the following are true about the Java Cryptography Extension?

It is included with the Java 2 Platform

It implements cryptographic algorithms

It is subject to US export controls

It contains the Java Security API.


Answer: B, C


19. Where are victim hosts located with respect to a firewall?

In the external untrusted network

In the internal trusted network

In the DMZ

In the application proxy.


Answer: C


20. Which of the following are true about message digests?

They are calculated using an encryption algorithm

They are used as a digital fingerprint for messages and files

They summarize the content of a message

They are computed using one-way functions.


Answer: B, D
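A short example of computing a message digest with java.security.MessageDigest (SHA-256 here, though any supported hash algorithm works the same way):

```java
import java.security.MessageDigest;

public class DigestDemo {
    // Compute a fixed-length, one-way "fingerprint" of the input.
    public static byte[] fingerprint(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] d1 = fingerprint("hello".getBytes("UTF-8"));
        byte[] d2 = fingerprint("hello!".getBytes("UTF-8"));
        // Same input -> same digest; even a one-character change
        // produces a completely different digest value.
        System.out.println("length = " + d1.length
                + ", equal = " + java.util.Arrays.equals(d1, d2));
    }
}
```

Note that the digest is computed with a one-way hash function, not an encryption algorithm: there is no key, and the original message cannot be recovered from the digest.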


21. Which of the following are characteristics of a publish/subscribe architecture?

Reliance on the use of URLs to identify publishers

Subject-based addressing

Location-independent publishers

Synchronous communication between publishers and subscribers.


Answer: B, C
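A toy in-memory sketch of subject-based addressing (all names here are illustrative; a real system would use JMS topics or a messaging product). Publishers and subscribers share only a subject name, never each other's location or identity:

```java
import java.util.*;
import java.util.function.Consumer;

public class PubSubDemo {
    // Maps a subject name to the handlers subscribed to it.
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String subject, Consumer<String> handler) {
        subscribers.computeIfAbsent(subject, s -> new ArrayList<>()).add(handler);
    }

    public void publish(String subject, String message) {
        // Deliver only to handlers registered for this subject;
        // the publisher never learns who (if anyone) received it.
        for (Consumer<String> h : subscribers.getOrDefault(subject, List.of())) {
            h.accept(message);
        }
    }

    public static void main(String[] args) {
        PubSubDemo bus = new PubSubDemo();
        List<String> received = new ArrayList<>();
        bus.subscribe("stock.quotes", received::add);
        bus.publish("stock.quotes", "ACME 42.00");
        bus.publish("weather", "sunny");   // no subscribers for this subject
        System.out.println(received);
    }
}
```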


22. Which technology is used for managing enterprise services, systems, and networks?






Answer: A