Deep copying objects in Java

Introduction:

How do you make a deep copy of an object in Java?

The java.lang.Object root superclass defines a clone() method that will, assuming the subclass implements the java.lang.Cloneable interface, return a copy of the object. While Java classes are free to override this method to do more complex kinds of cloning, the default behavior of clone() is to return a shallow copy of the object. This means that the values of all of the original object's fields are copied to the fields of the new object.

A property of shallow copies is that fields that refer to other objects will point to the same objects in both the original and the clone. For fields that contain primitive or immutable values (int, String, float, etc.), there is little chance of this causing problems. For mutable objects, however, cloning can lead to unexpected results, as the sketch below shows.
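
As a minimal illustration (a hypothetical class, not taken from the original article), consider a type whose clone() simply delegates to Object.clone():

import java.util.Date;

/**
 * Hypothetical example: clone() delegates to Object.clone(), which makes a
 * shallow, field-by-field copy, so the mutable Date field ends up shared
 * between the original and the clone.
 */
public class Meeting implements Cloneable {

    Date when = new Date();

    public Object clone() {
        try {
            return super.clone();  // shallow copy
        }
        catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new InternalError(e.toString());
        }
    }

    public static void main(String[] args) {
        Meeting original = new Meeting();
        Meeting copy = (Meeting) original.clone();

        // Both objects refer to the same Date instance...
        System.out.println(original.when == copy.when);  // prints "true"

        // ...so mutating the clone's Date also changes the original.
        copy.when.setTime(0L);
        System.out.println(original.when);  // now the epoch, not the current time
    }
}

A general-purpose way to get a true deep copy, without writing copy logic by hand for every class, is to serialize the object to a byte array and then deserialize it. The utility class below takes that approach: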

import java.io.IOException;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 *
 * A later version of this class includes some minor optimizations.
 */
public class UnoptimizedDeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Make an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            obj = in.readObject();
        }
        catch(IOException e) {
            e.printStackTrace();
        }
        catch(ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }

}

 Unfortunately, this approach has some problems, too:

 

  1. It will only work when the object being copied, as well as all of the other objects referenced directly or indirectly by the object, are serializable. (In other words, they must implement java.io.Serializable.) Fortunately, it is often sufficient to simply declare that a given class implements java.io.Serializable and let Java's default serialization mechanisms do their thing.
  2. Java Object Serialization is slow, and using it to make a deep copy requires both serializing and deserializing. There are ways to speed it up (e.g., by pre-computing serial version ids and defining custom readObject() and writeObject() methods; see the sketch after this list), but this will usually be the primary bottleneck.
  3. The byte array stream implementations included in the java.io package are designed to be general enough to perform reasonably well for data of different sizes and to be safe to use in a multi-threaded environment. These characteristics, however, slow down ByteArrayOutputStream and (to a lesser extent) ByteArrayInputStream.
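
To make points 1 and 2 concrete, here is a minimal sketch of what opting in to serialization looks like (the class and field names are hypothetical). The trivial readObject()/writeObject() implementations below only delegate to the defaults; any real speed-up would come from declaring serialVersionUID and hand-writing the field I/O in those methods.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Date;

/**
 * Hypothetical example: the class opts in to Java serialization,
 * pre-computes its serial version id, and supplies the custom
 * readObject()/writeObject() hooks mentioned above.
 */
public class Appointment implements Serializable {

    // Declaring this avoids having it computed reflectively at runtime.
    private static final long serialVersionUID = 1L;

    private Date when = new Date();
    private String title = "untitled";

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();  // default field-by-field serialization
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();    // default field-by-field deserialization
    }
}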

 

The first two of these problems cannot be addressed in a general way. We can, however, use alternative implementations of ByteArrayOutputStream and ByteArrayInputStream that make three simple optimizations:

 

  1. ByteArrayOutputStream, by default, begins with a 32-byte array for the output. As content is written to the stream, the required size of the content is computed and (if necessary) the array is expanded to the greater of the required size or twice the current size. JOS produces output that is somewhat bloated (for example, fully qualified class names are included in uncompressed string form), so the 32-byte default starting size means that lots of small arrays are created, copied into, and thrown away as data is written. This has an easy fix: construct the array with a larger initial size.
  2. All of the methods of ByteArrayOutputStream that modify the contents of the byte array are synchronized. In general this is a good idea, but in this case we can be certain that only a single thread will ever be accessing the stream. Removing the synchronization will speed things up a little. ByteArrayInputStream's methods are also synchronized.
  3. The toByteArray() method creates and returns a copy of the stream's byte array. Again, this is usually a good idea: if you retrieve the byte array and then continue writing to the stream, the retrieved byte array should not change. For this case, however, creating another byte array and copying into it merely wastes cycles and makes extra work for the garbage collector.

An optimized implementation of ByteArrayOutputStream is shown in Figure 4.

import java.io.OutputStream;
import java.io.InputStream;

/**
 * ByteArrayOutputStream implementation that doesn't synchronize methods
 * and doesn't copy the data on toByteArray().
 */
public class FastByteArrayOutputStream extends OutputStream {
    /**
     * Buffer and size
     */
    protected byte[] buf = null;
    protected int size = 0;

    /**
     * Constructs a stream with buffer capacity size 5K
     */
    public FastByteArrayOutputStream() {
        this(5 * 1024);
    }

    /**
     * Constructs a stream with the given initial size
     */
    public FastByteArrayOutputStream(int initSize) {
        this.size = 0;
        this.buf = new byte[initSize];
    }

    /**
     * Ensures that we have a large enough buffer for the given size.
     */
    private void verifyBufferSize(int sz) {
        if (sz > buf.length) {
            byte[] old = buf;
            buf = new byte[Math.max(sz, 2 * buf.length)];
            System.arraycopy(old, 0, buf, 0, old.length);
            old = null;
        }
    }

    public int getSize() {
        return size;
    }

    /**
     * Returns the byte array containing the written data. Note that this
     * array will almost always be larger than the amount of data actually
     * written.
     */
    public byte[] getByteArray() {
        return buf;
    }

    public final void write(byte b[]) {
        verifyBufferSize(size + b.length);
        System.arraycopy(b, 0, buf, size, b.length);
        size += b.length;
    }

    public final void write(byte b[], int off, int len) {
        verifyBufferSize(size + len);
        System.arraycopy(b, off, buf, size, len);
        size += len;
    }

    public final void write(int b) {
        verifyBufferSize(size + 1);
        buf[size++] = (byte) b;
    }

    public void reset() {
        size = 0;
    }

    /**
     * Returns a ByteArrayInputStream for reading back the written data
     */
    public InputStream getInputStream() {
        return new FastByteArrayInputStream(buf, size);
    }

}

 

Figure 4. Optimized version of ByteArrayOutputStream.

 

The getInputStream() method returns an instance of an optimized version of ByteArrayInputStream that has unsynchronized methods. The implementation of FastByteArrayInputStream is shown in Figure 5.

import java.io.InputStream;

/**
 * ByteArrayInputStream implementation that does not synchronize methods.
 */
public class FastByteArrayInputStream extends InputStream {
    /**
     * Our byte buffer
     */
    protected byte[] buf = null;

    /**
     * Number of bytes that we can read from the buffer
     */
    protected int count = 0;

    /**
     * Number of bytes that have been read from the buffer
     */
    protected int pos = 0;

    public FastByteArrayInputStream(byte[] buf, int count) {
        this.buf = buf;
        this.count = count;
    }

    public final int available() {
        return count - pos;
    }

    public final int read() {
        return (pos < count) ? (buf[pos++] & 0xff) : -1;
    }

    public final int read(byte[] b, int off, int len) {
        if (pos >= count)
            return -1;

        if ((pos + len) > count)
            len = (count - pos);

        System.arraycopy(buf, pos, b, off, len);
        pos += len;
        return len;
    }

    public final long skip(long n) {
        if ((pos + n) > count)
            n = count - pos;
        if (n < 0)
            return 0;
        pos += n;
        return n;
    }

}

 Figure 5. Optimized version of ByteArrayInputStream.

 

Figure 6 shows a version of a deep copy utility that uses these classes:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 */
public class DeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            FastByteArrayOutputStream fbos =
                    new FastByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(fbos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Retrieve an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in =
                new ObjectInputStream(fbos.getInputStream());
            obj = in.readObject();
        }
        catch(IOException e) {
            e.printStackTrace();
        }
        catch(ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }

}

 

Figure 6. Deep-copy implementation using optimized byte array streams.

 

The extent of the speed boost will depend on a number of factors in your specific application (more on this later), but the simple class shown in Figure 7 tests the optimized and unoptimized versions of the deep copy utility by repeatedly copying a large object.

 

import java.util.Hashtable;
import java.util.Vector;
import java.util.Date;

public class SpeedTest {

    public static void main(String[] args) {
        // Make a reasonably large test object. Note that this doesn't
        // do anything useful -- it is simply intended to be large, have
        // several levels of references, and be somewhat random. We start
        // with a hashtable and add vectors to it, where each element in
        // the vector is a Date object (initialized to the current time),
        // a semi-random string, and a (circular) reference back to the
        // object itself. In this case the resulting object produces
        // a serialized representation that is approximately 700K.
        Hashtable obj = new Hashtable();
        for (int i = 0; i < 100; i++) {
            Vector v = new Vector();
            for (int j = 0; j < 100; j++) {
                v.addElement(new Object[] {
                    new Date(),
                    "A random number: " + Math.random(),
                    obj
                 });
            }
            obj.put(new Integer(i), v);
        }

        int iterations = 10;

        // Make copies of the object using the unoptimized version
        // of the deep copy utility.
        long unoptimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = UnoptimizedDeepCopy.copy(obj);
            unoptimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        // Repeat with the optimized version
        long optimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = DeepCopy.copy(obj);
            optimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        System.out.println("Unoptimized time: " + unoptimizedTime);
        System.out.println("  Optimized time: " + optimizedTime);
    }

}

 Figure 7. Testing the two deep copy implementations.

 

A few notes about this test:

 

  • The object that we are copying is large. While somewhat random, it will generally have a serialized size of around 700 Kbytes.
  • The most significant speed boost comes from avoiding extra copying of data in FastByteArrayOutputStream. This has several implications:

    1. Using the unsynchronized FastByteArrayInputStream speeds things up a little, but the standard java.io.ByteArrayInputStream is nearly as fast.
    2. Performance is mildly sensitive to the initial buffer size in FastByteArrayOutputStream, but is much more sensitive to the rate at which the buffer grows. If the objects you are copying tend to be of similar size, copying will be much faster if you initialize the buffer to an appropriate size and tweak the rate of growth (see the sketch after this list).
  • Measuring speed using elapsed time between two calls to System.currentTimeMillis() is problematic, but for single-threaded applications and testing relatively slow operations it is sufficient. A number of commercial tools (such as JProfiler) will give more accurate per-method timing data.
  • Testing code in a loop is also problematic, since the first few iterations will be slower until HotSpot decides to compile the code. Testing larger numbers of iterations alleviates this problem.
  • Garbage collection further complicates matters, particularly in cases where lots of memory is allocated. In this example, we manually invoke the garbage collector after each copy to try to keep it from running while a copy is in progress.
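
As an example of the buffer-sizing point above, a caller who knows the approximate serialized size could use a convenience overload along these lines (a hypothetical addition, not part of the article's DeepCopy class) so the buffer rarely, if ever, has to grow:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

/**
 * Hypothetical variant of DeepCopy.copy() that lets the caller supply an
 * expected serialized size, so the buffer starts out large enough.
 */
public class SizedDeepCopy {

    public static Object copy(Object orig, int expectedSize) {
        Object obj = null;
        try {
            // Start the buffer at the expected size instead of the 5K default.
            FastByteArrayOutputStream fbos =
                    new FastByteArrayOutputStream(expectedSize);
            ObjectOutputStream out = new ObjectOutputStream(fbos);
            out.writeObject(orig);
            out.flush();
            out.close();

            ObjectInputStream in =
                new ObjectInputStream(fbos.getInputStream());
            obj = in.readObject();
        }
        catch(IOException e) {
            e.printStackTrace();
        }
        catch(ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }

}

For the roughly 700K test object above, something like SizedDeepCopy.copy(obj, 800 * 1024) leaves a little headroom and skips the grow-and-copy steps entirely.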

 

These caveats aside, the performance difference is significant. For example, the code as shown in Figure 7 (on a 500 MHz G3 Macintosh iBook running OS X 10.3 and Java 1.4.1) reveals that the unoptimized version requires about 1.8 seconds per copy, while the optimized version requires only about 1.3 seconds. Whether or not this difference is significant will, of course, depend on the frequency with which your application does deep copies and the size of the objects being copied.

For very large objects, an extension to this approach can reduce the peak memory footprint by serializing and deserializing in parallel threads. See "Low-Memory Deep Copy Technique for Java Objects" for more information.
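
That article is not reproduced here, but the basic idea is roughly as follows (a sketch based only on the description above, built on the standard java.io piped streams; the actual implementation in the referenced article may differ): one thread serializes the object into a pipe while the current thread deserializes the copy from the other end, so the full serialized form never has to exist as a single in-memory byte array.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

/**
 * Sketch of a low-memory deep copy: serialization runs in a background
 * thread and deserialization runs in the calling thread, with a small
 * pipe buffer between them.
 */
public class PipedDeepCopy {

    public static Object copy(final Object orig) {
        Object obj = null;
        try {
            final PipedOutputStream pos = new PipedOutputStream();
            PipedInputStream pis = new PipedInputStream(pos);

            // Writer thread: serialize the object into the pipe.
            Thread writer = new Thread(new Runnable() {
                public void run() {
                    try {
                        ObjectOutputStream out = new ObjectOutputStream(pos);
                        out.writeObject(orig);
                        out.close();
                    }
                    catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
            writer.start();

            // Reader side: deserialize the copy as bytes become available.
            ObjectInputStream in = new ObjectInputStream(pis);
            obj = in.readObject();
            writer.join();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return obj;
    }

}

The pipe's small internal buffer is what keeps the peak footprint low; the trade-off is the cost of coordinating the two threads.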

 

Source: http://javatechniques.com/blog/faster-deep-copies-of-java-objects/

Reference: http://alvinalexander.com/java/java-deep-clone-example-source-code
