      .geography, 25);
    System.out.println(m);
    monitor.expect(new Object[] {
      "%%\\{((ERITREA=Asmara|CAPE " +
      "VERDE=Praia|GABON=Libreville|BENIN=Porto-Novo|" +
      "THE GAMBIA=Banjul|ETHIOPIA=Addis Ababa|" +
      "EGYPT=Cairo|CHAD=N'djamena|GUINEA=-|" +
      "DJIBOUTI=Dijibouti|CETE D'IVOIR \\(IVORY " +
      "COAST\\)=Yamoussoukro|KENYA=Nairobi|CENTRAL" +
      " AFRICAN REPUBLIC=Bangui|ALGERIA=Algiers|" +
      "CAMEROON=Yaounde|BURKINA FASO=Ouagadougou|" +
      "EQUATORIAL GUINEA=Malabo|GHANA=Accra|" +
      "COMOROS=Moroni|BISSAU=Bissau|BURUNDI=Bujumbura|" +
      "CONGO=Brazzaville|ANGOLA=Luanda|" +
      "BOTSWANA=Gaberone)(, ){0,1}){24}\\}"
    });
  }
} ///:~

Because the slots in a hash table are often referred to as buckets, the array that represents the actual table is called bucket. To promote even distribution, the number of buckets is typically a prime number. Notice that it is an array of LinkedList, which automatically provides for collisions; each new item is simply added to the end of the list in its bucket.

The return value of put( ) is null or, if the key was already in the list, the old value associated with that key. The return value is held in result, which is initialized to null; if the key is discovered in the list, result is assigned the old value associated with that key.

For both put( ) and get( ), the first thing that happens is that hashCode( ) is called for the key, and the result is forced to a positive number. Then it is forced to fit into the bucket array using the modulus operator and the size of the array. If that location is null, it means there are no elements that hash to that location, so a new LinkedList is created to hold the object that just did. In the normal case, though, the code looks through the list to see if there is a duplicate key; if there is, the old value is put into result and the new value replaces the old. The found flag keeps track of whether an old key-value pair was found and, if not, the new pair is appended to the end of the list.

In get( ), you'll see code very similar to that in put( ), but simpler: the index into the bucket array is calculated, and if a LinkedList exists there, it is searched for a match. entrySet( ) must find and traverse all the lists, adding them to the result Set. Once this method has been created, the Map can be tested by filling it with values and then printing them.
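To make those steps concrete in one place, here is a minimal sketch of a hash map whose buckets are an array of LinkedLists, following the put( )/get( ) logic just described. It is not the book's SimpleHashMap listing; the class name SketchMap, the bucket count, and the use of Object[] key-value pairs are choices made for this illustration.

import java.util.Iterator;
import java.util.LinkedList;

// Minimal sketch (not the book's SimpleHashMap): the bucket array holds
// LinkedLists of key-value pairs, and collisions simply extend the list.
public class SketchMap {
  private static final int SIZE = 997; // a prime number of buckets
  private LinkedList[] bucket = new LinkedList[SIZE];

  public Object put(Object key, Object value) {
    Object result = null;
    // Force the hash code positive, then fit it to the array with modulus:
    int index = Math.abs(key.hashCode()) % SIZE;
    if (bucket[index] == null)
      bucket[index] = new LinkedList();
    LinkedList pairs = bucket[index];
    boolean found = false;
    Iterator it = pairs.iterator();
    while (it.hasNext()) {
      Object[] pair = (Object[]) it.next();
      if (pair[0].equals(key)) { // duplicate key found
        result = pair[1];        // remember the old value...
        pair[1] = value;         // ...and replace it with the new one
        found = true;
        break;
      }
    }
    if (!found)                  // new key: append to the end of the list
      pairs.add(new Object[] { key, value });
    return result;               // null, or the old value for this key
  }

  public Object get(Object key) {
    int index = Math.abs(key.hashCode()) % SIZE;
    if (bucket[index] == null)
      return null;               // nothing has hashed to this bucket
    Iterator it = bucket[index].iterator();
    while (it.hasNext()) {
      Object[] pair = (Object[]) it.next();
      if (pair[0].equals(key))
        return pair[1];
    }
    return null;
  }

  public static void main(String[] args) {
    SketchMap m = new SketchMap();
    m.put("ALGERIA", "Algiers");
    m.put("KENYA", "Nairobi");
    System.out.println(m.get("KENYA"));            // Nairobi
    System.out.println(m.put("KENYA", "NAIROBI")); // old value: Nairobi
  }
}

Each bucket's list is searched linearly, so lookup cost grows with the length of a bucket's list; this is why the even distribution and load factor discussed next matter.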
HashMap performance factors

To understand the issues, some terminology is necessary:

Capacity: The number of buckets in the table.

Initial capacity: The number of buckets when the table is created. HashMap and HashSet have constructors that allow you to specify the initial capacity.

Size: The number of entries currently in the table.

Load factor: size/capacity. HashMap and HashSet have constructors that allow you to specify the load factor; when that load factor is reached, the container automatically increases its capacity (the number of buckets) and redistributes the existing entries into the new set of buckets (this is called rehashing).

The default load factor used by HashMap is 0.75 (it doesn't rehash until the table is ¾ full). This seems to be a good trade-off between time and space costs. A higher load factor decreases the space required by the table but increases the lookup cost, which is important because lookup is what you do most of the time (including both get( ) and put( )). If you know that you'll be storing many entries in a HashMap, creating it with an appropriately large initial capacity will prevent the overhead of automatic rehashing.

Overriding hashCode( )

Now that you understand what's involved in the function of the HashMap, the issues involved in writing a hashCode( ) will make more sense.

First of all, you don't have control of the creation of the actual value that's used to index into the array of buckets. That is dependent on the capacity of the particular HashMap object, and that capacity changes depending on how full the container is and what the load factor is. The value produced by your hashCode( ) will be further processed in order to create the bucket index (in SimpleHashMap, the calculation is just a modulo by the size of the bucket array).

The most important factor in creating a hashCode( ) is that, regardless of when hashCode( ) is called, it produces the same value for a particular object every time it is called. If you end up with an object that produces one hashCode( ) value when it is put( ) into a HashMap and another during a get( ), you won't be able to retrieve the objects. So if your hashCode( ) depends on mutable data in the object, the user must be made aware that changing the data will effectively produce a different key by generating a different hashCode( ).

In addition, you will probably not want to generate a hashCode( ) that is based on unique object information; in particular, the value of this makes a bad hashCode( ) because then you can't generate a new key identical to the one used to put( ) the original key-value pair. This was the problem that occurred in SpringDetector.java, because the default implementation of hashCode( ) does use the object address. So you'll want to use information in the object that identifies the object in a meaningful way.

One example is found in the String class. Strings have the special characteristic that if a program has several String objects that contain identical character sequences, then those String objects all map to the same memory (the mechanism for this is described in Appendix A). So it makes sense that the hashCode( ) produced by two separate instances of new String("hello") should be identical. You can see it by running this program:

//: c09:StringHashCode.java
import com.bruceeckel.simpletest.*;

public class StringHashCode {
  static Test monitor = new Test();
  public static void main(String[] args) {
    // Two separate String objects with identical contents
    // produce the same hash code:
    System.out.println(new String("hello").hashCode());
    System.out.println(new String("hello").hashCode());
    monitor.expect(new Object[] {
      "99162322",
      "99162322"
    });
  }
} ///:~
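The same idea carries over to classes you write yourself: if hashCode( ) is computed from the fields that define an object's identity (the same fields that equals( ) compares), then two logically equal objects hash identically no matter where they live in memory. The sketch below is only an illustration; the CountryCapital class, its fields, and the 17/31 combining constants are choices made for this example, not code from the book.

import java.util.HashMap;

// Illustrative sketch: hashCode() derived from identifying fields rather
// than from the object's address, paired with a matching equals().
public class CountryCapital {
  private final String country;
  private final String capital;

  public CountryCapital(String country, String capital) {
    this.country = country;
    this.capital = capital;
  }

  public boolean equals(Object o) {
    if (!(o instanceof CountryCapital)) return false;
    CountryCapital c = (CountryCapital) o;
    return country.equals(c.country) && capital.equals(c.capital);
  }

  public int hashCode() {
    // A common recipe: combine the field hash codes; the multiplier 31
    // mirrors the one used by String.hashCode().
    int result = 17;
    result = 31 * result + country.hashCode();
    result = 31 * result + capital.hashCode();
    return result;
  }

  public static void main(String[] args) {
    CountryCapital a = new CountryCapital("KENYA", "Nairobi");
    CountryCapital b = new CountryCapital("KENYA", "Nairobi");
    System.out.println(a.hashCode() == b.hashCode()); // true
    HashMap m = new HashMap();
    m.put(a, "east Africa");
    // A separate but equal key finds the entry:
    System.out.println(m.get(b)); // east Africa
  }
}

Because the hash code is derived from the object's data rather than its address, a freshly constructed but equal key retrieves the stored value, avoiding the SpringDetector.java problem described earlier.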