/*
 * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * Hash table based implementation of the {@code Map} interface.  This
 * implementation provides all of the optional map operations, and permits
 * {@code null} values and the {@code null} key.  (The {@code HashMap}
 * class is roughly equivalent to {@code Hashtable}, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations ({@code get} and {@code put}), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * {@code HashMap} instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of {@code HashMap} has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the {@code HashMap} class, including
 * {@code get} and {@code put}).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a {@code HashMap}
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table.  To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * {@code remove} method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw {@code ConcurrentModificationException} on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/java.base/java/util/package-summary.html#CollectionsFramework">
 * Java Collections Framework</a>.
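 *
 * <p>For example, a map expected to hold roughly one hundred mappings can be
 * pre-sized so that it is never rehashed while being populated (an
 * illustrative sketch using the {@code newHashMap} factory described below):
 * <pre>{@code
 *   Map<String, Integer> scores = HashMap.newHashMap(100);
 *   scores.put("alice", 97);   // no resize occurs for up to ~100 entries
 * }</pre>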
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    @java.io.Serial
    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap.  Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node).  Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated.  However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>"
     * type, then their compareTo method is used for ordering.  (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor.)  The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable.  Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable.  (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions.  But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD).  And when they become too small (due to
     * removal or resizing) they are converted back to plain bins.  In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used.  Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity.  Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)).  The first values are:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     * The root of a tree bin is normally its first node.  However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove.  When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap.  See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics.  (This also
     * requires that a map instance be passed to some utility methods
     * that may create new nodes.)
     *
     * The concurrent-programming-like SSA-based coding style helps
     * avoid aliasing errors amid all of the twisty pointer operations.
     */

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes.  The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation.  Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;

            return o instanceof Map.Entry<?, ?> e
                    && Objects.equals(key, e.getKey())
                    && Objects.equals(value, e.getValue());
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide.  (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward.  There is a tradeoff between speed, utility, and
     * quality of bit-spreading.  Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (Type t : ts) {
                    if ((t instanceof ParameterizedType) &&
                        ((p = (ParameterizedType) t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary.  When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet().  Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException.)
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty {@code HashMap} with the specified initial
     * capacity and load factor.
     *
     * @apiNote
     * To create a {@code HashMap} with an initial capacity that accommodates
     * an expected number of mappings, use {@link #newHashMap(int) newHashMap}.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty {@code HashMap} with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @apiNote
     * To create a {@code HashMap} with an initial capacity that accommodates
     * an expected number of mappings, use {@link #newHashMap(int) newHashMap}.
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty {@code HashMap} with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new {@code HashMap} with the same mappings as the
     * specified {@code Map}.  The {@code HashMap} is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified {@code Map}.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     *        true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                double dt = Math.ceil(s / (double)loadFactor);
                int t = ((dt < (double)MAXIMUM_CAPACITY) ?
                         (int)dt : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            } else {
                // Because of linked-list bucket constraints, we cannot
                // expand all at once, but can reduce total resize
                // effort by repeated doubling now vs later
                while (s > threshold && table.length < MAXIMUM_CAPACITY)
                    resize();
            }

            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns {@code true} if this map contains no key-value mappings.
     *
     * @return {@code true} if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods.
     *
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n, hash; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & (hash = hash(key))]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * Returns {@code true} if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return {@code true} if this map contains a mapping for the specified
     *         key.
     */
    public boolean containsKey(Object key) {
        return getNode(key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with {@code key}, or
     *         {@code null} if there was no mapping for {@code key}.
     *         (A {@code null} return can also indicate that the map
     *         previously associated {@code null} with {@code key}.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }
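
    /*
     * Illustrative numbers for the scheme used above (a sketch, not part of
     * the algorithm itself): hash() folds the high bits into the low bits,
     * so a key whose hashCode() is 0x12345678 gets the spread hash
     * 0x12345678 ^ 0x00001234 == 0x1234444C.  With a table of length
     * n == 16, putVal and getNode place an entry whose spread hash is 0x2A
     * in bucket 0x2A & (n - 1) == 10.  When the table doubles, an entry
     * either stays at its index (when (hash & oldCap) == 0, as for 0x2A) or
     * moves to index + oldCap == 26; resize() below relies on exactly this
     * property.
     */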

    /**
     * Initializes or doubles table size.  If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with {@code key}, or
     *         {@code null} if there was no mapping for {@code key}.
     *         (A {@code null} return can also indicate that the map
     *         previously associated {@code null} with {@code key}.)
     */
    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.remove and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal
     * @param movable if false do not move other nodes while removing
     * @return the node, or null if none
     */
    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            Node<K,V> node = null, e; K k; V v;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key ||
                             (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p)
                    tab[index] = node.next;
                else
                    p.next = node.next;
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        Node<K,V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i)
                tab[i] = null;
        }
    }

    /**
     * Returns {@code true} if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return {@code true} if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        Node<K,V>[] tab; V v;
        if ((tab = table) != null && size > 0) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    if ((v = e.value) == value ||
                        (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own {@code remove} operation), the results of
     * the iteration are undefined.  The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * {@code Iterator.remove}, {@code Set.remove},
     * {@code removeAll}, {@code retainAll}, and {@code clear}
     * operations.  It does not support the {@code add} or {@code addAll}
     * operations.
     *
     * @return a set view of the keys contained in this map
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new KeySet();
            keySet = ks;
        }
        return ks;
    }

    /**
     * Prepares the array for the {@link Collection#toArray(Object[])} implementation.
     * If the supplied array is smaller than this map's size, a new array is allocated.
     * If the supplied array is bigger than this map's size, a null is written at the size index.
     *
     * @param a an original array passed to the {@code toArray()} method
     * @param <T> type of array elements
     * @return an array ready to be filled and returned from the {@code toArray()} method.
     */
    @SuppressWarnings("unchecked")
    final <T> T[] prepareArray(T[] a) {
        int size = this.size;
        if (a.length < size) {
            return (T[]) java.lang.reflect.Array
                    .newInstance(a.getClass().getComponentType(), size);
        }
        if (a.length > size) {
            a[size] = null;
        }
        return a;
    }

    /**
     * Fills an array with this map's keys and returns it.  This method assumes
     * that the input array is big enough to fit all the keys.  Use
     * {@link #prepareArray(Object[])} to ensure this.
     *
     * @param a an array to fill
     * @param <T> type of array elements
     * @return supplied array
     */
    <T> T[] keysToArray(T[] a) {
        Object[] r = a;
        Node<K,V>[] tab;
        int idx = 0;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    r[idx++] = e.key;
                }
            }
        }
        return a;
    }

    /**
     * Fills an array with this map's values and returns it.  This method assumes
     * that the input array is big enough to fit all the values.  Use
     * {@link #prepareArray(Object[])} to ensure this.
     *
     * @param a an array to fill
     * @param <T> type of array elements
     * @return supplied array
     */
    <T> T[] valuesToArray(T[] a) {
        Object[] r = a;
        Node<K,V>[] tab;
        int idx = 0;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    r[idx++] = e.value;
                }
            }
        }
        return a;
    }

    final class KeySet extends AbstractSet<K> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<K> iterator()     { return new KeyIterator(); }
        public final boolean contains(Object o) { return containsKey(o); }
        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }
        public final Spliterator<K> spliterator() {
            return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public Object[] toArray() {
            return keysToArray(new Object[size]);
        }

        public <T> T[] toArray(T[] a) {
            return keysToArray(prepareArray(a));
        }

        public final void forEach(Consumer<? super K> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.key);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own {@code remove} operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the {@code Iterator.remove},
     * {@code Collection.remove}, {@code removeAll},
     * {@code retainAll} and {@code clear} operations.  It does not
     * support the {@code add} or {@code addAll} operations.
     *
     * @return a view of the values contained in this map
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<V> iterator()     { return new ValueIterator(); }
        public final boolean contains(Object o) { return containsValue(o); }
        public final Spliterator<V> spliterator() {
            return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public Object[] toArray() {
            return valuesToArray(new Object[size]);
        }

        public <T> T[] toArray(T[] a) {
            return valuesToArray(prepareArray(a));
        }

        public final void forEach(Consumer<? super V> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own {@code remove} operation, or through the
     * {@code setValue} operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.  The set
     * supports element removal, which removes the corresponding
     * mapping from the map, via the {@code Iterator.remove},
     * {@code Set.remove}, {@code removeAll}, {@code retainAll} and
     * {@code clear} operations.  It does not support the
     * {@code add} or {@code addAll} operations.
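     *
     * <p>For example, the mappings can be traversed through this view (an
     * illustrative snippet; the key and value types are arbitrary):
     * <pre>{@code
     *   for (Map.Entry<String, Integer> e : map.entrySet())
     *       System.out.println(e.getKey() + " -> " + e.getValue());
     * }</pre>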
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        Set<Map.Entry<K,V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }

    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry<?, ?> e))
                return false;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(key);
            return candidate != null && candidate.equals(e);
        }
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry<?, ?> e) {
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        public final Spliterator<Map.Entry<K,V>> spliterator() {
            return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    // Overrides of JDK8 Map extension methods

    @Override
    public V getOrDefault(Object key, V defaultValue) {
        Node<K,V> e;
        return (e = getNode(key)) == null ? defaultValue : e.value;
    }

    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }

    @Override
    public boolean remove(Object key, Object value) {
        return removeNode(hash(key), key, value, true, true) != null;
    }

    @Override
    public boolean replace(K key, V oldValue, V newValue) {
        Node<K,V> e; V v;
        if ((e = getNode(key)) != null &&
            ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
            e.value = newValue;
            afterNodeAccess(e);
            return true;
        }
        return false;
    }

    @Override
    public V replace(K key, V value) {
        Node<K,V> e;
        if ((e = getNode(key)) != null) {
            V oldValue = e.value;
            e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
        return null;
    }
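
    // An illustrative use of computeIfAbsent (the map, key and element below
    // are hypothetical): the multimap idiom, which lazily creates the value
    // collection the first time a key is seen:
    //
    //   map.computeIfAbsent(key, k -> new ArrayList<>()).add(element);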

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * mapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     *         mapping function modified this map
     */
    @Override
    public V computeIfAbsent(K key,
                             Function<? super K, ? extends V> mappingFunction) {
        if (mappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
            V oldValue;
            if (old != null && (oldValue = old.value) != null) {
                afterNodeAccess(old);
                return oldValue;
            }
        }
        int mc = modCount;
        V v = mappingFunction.apply(key);
        if (mc != modCount) { throw new ConcurrentModificationException(); }
        if (v == null) {
            return null;
        } else if (old != null) {
            old.value = v;
            afterNodeAccess(old);
            return v;
        }
        else if (t != null)
            t.putTreeVal(this, tab, hash, key, v);
        else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        modCount = mc + 1;
        ++size;
        afterNodeInsertion(true);
        return v;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     *         remapping function modified this map
     */
    @Override
    public V computeIfPresent(K key,
                              BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        Node<K,V> e; V oldValue;
        if ((e = getNode(key)) != null &&
            (oldValue = e.value) != null) {
            int mc = modCount;
            V v = remappingFunction.apply(key, oldValue);
            if (mc != modCount) { throw new ConcurrentModificationException(); }
            if (v != null) {
                e.value = v;
                afterNodeAccess(e);
                return v;
            }
            else {
                int hash = hash(key);
                removeNode(hash, key, null, false, true);
            }
        }
        return null;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     *         remapping function modified this map
     */
    @Override
    public V compute(K key,
                     BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        V oldValue = (old == null) ? null : old.value;
        int mc = modCount;
        V v = remappingFunction.apply(key, oldValue);
        if (mc != modCount) { throw new ConcurrentModificationException(); }
        if (old != null) {
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
        }
        else if (v != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            modCount = mc + 1;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }
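
    // An illustrative use of compute (hypothetical map and key): the
    // counting idiom, which stores 1 the first time a key is seen and
    // increments the existing count afterwards:
    //
    //   counts.compute(word, (k, v) -> (v == null) ? 1 : v + 1);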

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     *         remapping function modified this map
     */
    @Override
    public V merge(K key, V value,
                   BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        if (value == null || remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        if (old != null) {
            V v;
            if (old.value != null) {
                int mc = modCount;
                v = remappingFunction.apply(old.value, value);
                if (mc != modCount) {
                    throw new ConcurrentModificationException();
                }
            } else {
                v = value;
            }
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
            return v;
        } else {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, value);
            else {
                tab[i] = newNode(hash, key, value, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
            return value;
        }
    }

    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            // Android-changed: Detect changes to modCount early.
            for (int i = 0; (i < tab.length && mc == modCount); ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.key, e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
        Node<K,V>[] tab;
        if (function == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    e.value = function.apply(e.key, e.value);
                }
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    /* ------------------------------------------------------------ */
    // Cloning and serialization

    /**
     * Returns a shallow copy of this {@code HashMap} instance: the keys and
     * values themselves are not cloned.
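     *
     * <p>For example (an illustrative sketch; as with any {@code clone()} of
     * a generic collection, the cast is unchecked):
     * <pre>{@code
     *   HashMap<String, List<String>> copy =
     *       (HashMap<String, List<String>>) original.clone();
     *   // copy and original now share the same key and value objects
     * }</pre>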
     *
     * @return a shallow copy of this map
     */
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        HashMap<K,V> result;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }

    // These methods are also used when serializing HashSets
    final float loadFactor() { return loadFactor; }
    final int capacity() {
        return (table != null) ? table.length :
            (threshold > 0) ? threshold :
            DEFAULT_INITIAL_CAPACITY;
    }

    /**
     * Saves this map to a stream (that is, serializes it).
     *
     * @param s the stream
     * @throws IOException if an I/O error occurs
     * @serialData The <i>capacity</i> of the HashMap (the length of the
     *             bucket array) is emitted (int), followed by the
     *             <i>size</i> (an int, the number of key-value
     *             mappings), followed by the key (Object) and value (Object)
     *             for each key-value mapping.  The key-value mappings are
     *             emitted in no particular order.
     */
    @java.io.Serial
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }

    /**
     * Reconstitutes the {@code HashMap} instance from a stream (i.e.,
     * deserializes it).
     */
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException {

        ObjectInputStream.GetField fields = s.readFields();

        // Read loadFactor (ignore threshold)
        float lf = fields.get("loadFactor", 0.75f);
        if (lf <= 0 || Float.isNaN(lf))
            throw new InvalidObjectException("Illegal load factor: " + lf);

        lf = Math.clamp(lf, 0.25f, 4.0f);
        HashMap.UnsafeHolder.putLoadFactor(this, lf);

        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                             loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)

        if (mappings < 0) {
            throw new InvalidObjectException("Illegal mappings count: " + mappings);
        } else if (mappings == 0) {
            // use defaults
        } else if (mappings > 0) {
            double dc = Math.ceil(mappings / (double)lf);
            int cap = ((dc < DEFAULT_INITIAL_CAPACITY) ?
                       DEFAULT_INITIAL_CAPACITY :
                       (dc >= MAXIMUM_CAPACITY) ?
                       MAXIMUM_CAPACITY :
                       tableSizeFor((int)dc));
            float ft = (float)cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                         (int)ft : Integer.MAX_VALUE);
            @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }

    // Support for resetting final field during deserializing
    private static final class UnsafeHolder {
        private UnsafeHolder() { throw new InternalError(); }
        private static final jdk.internal.misc.Unsafe unsafe
                = jdk.internal.misc.Unsafe.getUnsafe();
        private static final long LF_OFFSET
                = unsafe.objectFieldOffset(HashMap.class, "loadFactor");
        static void putLoadFactor(HashMap<?, ?> map, float lf) {
            unsafe.putFloat(map, LF_OFFSET, lf);
        }
    }

    /* ------------------------------------------------------------ */
    // iterators

    abstract class HashIterator {
        Node<K,V> next;        // next entry to return
        Node<K,V> current;     // current entry
        int expectedModCount;  // for fast-fail
        int index;             // current slot

        HashIterator() {
            expectedModCount = modCount;
            Node<K,V>[] t = table;
            current = next = null;
            index = 0;
            if (t != null && size > 0) { // advance to first entry
                do {} while (index < t.length && (next = t[index++]) == null);
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Node<K,V> nextNode() {
            Node<K,V>[] t;
            Node<K,V> e = next;
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (e == null)
                throw new NoSuchElementException();
            if ((next = (current = e).next) == null && (t = table) != null) {
                do {} while (index < t.length && (next = t[index++]) == null);
            }
            return e;
        }

        public final void remove() {
            Node<K,V> p = current;
            if (p == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            current = null;
            removeNode(p.hash, p.key, null, false, false);
            expectedModCount = modCount;
        }
    }

    final class KeyIterator extends HashIterator
        implements Iterator<K> {
        public final K next() { return nextNode().key; }
    }

    final class ValueIterator extends HashIterator
        implements Iterator<V> {
        public final V next() { return nextNode().value; }
    }

    final class EntryIterator extends HashIterator
        implements Iterator<Map.Entry<K,V>> {
        public final Map.Entry<K,V> next() { return nextNode(); }
    }

    /* ------------------------------------------------------------ */
    // spliterators

    static class HashMapSpliterator<K,V> {
        final HashMap<K,V> map;
        Node<K,V> current;          // current node
        int index;                  // current index, modified on advance/split
        int fence;                  // one past last index
        int est;                    // size estimate
        int expectedModCount;       // for comodification checks

        HashMapSpliterator(HashMap<K,V> m, int origin,
                           int fence, int est,
                           int expectedModCount) {
this.map = m; 1654 this.index = origin; 1655 this.fence = fence; 1656 this.est = est; 1657 this.expectedModCount = expectedModCount; 1658 } 1659 getFence()1660 final int getFence() { // initialize fence and size on first use 1661 int hi; 1662 if ((hi = fence) < 0) { 1663 HashMap<K,V> m = map; 1664 est = m.size; 1665 expectedModCount = m.modCount; 1666 Node<K,V>[] tab = m.table; 1667 hi = fence = (tab == null) ? 0 : tab.length; 1668 } 1669 return hi; 1670 } 1671 estimateSize()1672 public final long estimateSize() { 1673 getFence(); // force init 1674 return (long) est; 1675 } 1676 } 1677 1678 static final class KeySpliterator<K,V> 1679 extends HashMapSpliterator<K,V> 1680 implements Spliterator<K> { KeySpliterator(HashMap<K,V> m, int origin, int fence, int est, int expectedModCount)1681 KeySpliterator(HashMap<K,V> m, int origin, int fence, int est, 1682 int expectedModCount) { 1683 super(m, origin, fence, est, expectedModCount); 1684 } 1685 trySplit()1686 public KeySpliterator<K,V> trySplit() { 1687 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1688 return (lo >= mid || current != null) ? null : 1689 new KeySpliterator<>(map, lo, index = mid, est >>>= 1, 1690 expectedModCount); 1691 } 1692 forEachRemaining(Consumer<? super K> action)1693 public void forEachRemaining(Consumer<? super K> action) { 1694 int i, hi, mc; 1695 if (action == null) 1696 throw new NullPointerException(); 1697 HashMap<K,V> m = map; 1698 Node<K,V>[] tab = m.table; 1699 if ((hi = fence) < 0) { 1700 mc = expectedModCount = m.modCount; 1701 hi = fence = (tab == null) ? 0 : tab.length; 1702 } 1703 else 1704 mc = expectedModCount; 1705 if (tab != null && tab.length >= hi && 1706 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1707 Node<K,V> p = current; 1708 current = null; 1709 do { 1710 if (p == null) 1711 p = tab[i++]; 1712 else { 1713 action.accept(p.key); 1714 p = p.next; 1715 } 1716 } while (p != null || i < hi); 1717 if (m.modCount != mc) 1718 throw new ConcurrentModificationException(); 1719 } 1720 } 1721 tryAdvance(Consumer<? super K> action)1722 public boolean tryAdvance(Consumer<? super K> action) { 1723 int hi; 1724 if (action == null) 1725 throw new NullPointerException(); 1726 Node<K,V>[] tab = map.table; 1727 if (tab != null && tab.length >= (hi = getFence()) && index >= 0) { 1728 while (current != null || index < hi) { 1729 if (current == null) 1730 current = tab[index++]; 1731 else { 1732 K k = current.key; 1733 current = current.next; 1734 action.accept(k); 1735 if (map.modCount != expectedModCount) 1736 throw new ConcurrentModificationException(); 1737 return true; 1738 } 1739 } 1740 } 1741 return false; 1742 } 1743 characteristics()1744 public int characteristics() { 1745 return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) | 1746 Spliterator.DISTINCT; 1747 } 1748 } 1749 1750 static final class ValueSpliterator<K,V> 1751 extends HashMapSpliterator<K,V> 1752 implements Spliterator<V> { ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est, int expectedModCount)1753 ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est, 1754 int expectedModCount) { 1755 super(m, origin, fence, est, expectedModCount); 1756 } 1757 trySplit()1758 public ValueSpliterator<K,V> trySplit() { 1759 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1760 return (lo >= mid || current != null) ? null : 1761 new ValueSpliterator<>(map, lo, index = mid, est >>>= 1, 1762 expectedModCount); 1763 } 1764 forEachRemaining(Consumer<? 
super V> action)1765 public void forEachRemaining(Consumer<? super V> action) { 1766 int i, hi, mc; 1767 if (action == null) 1768 throw new NullPointerException(); 1769 HashMap<K,V> m = map; 1770 Node<K,V>[] tab = m.table; 1771 if ((hi = fence) < 0) { 1772 mc = expectedModCount = m.modCount; 1773 hi = fence = (tab == null) ? 0 : tab.length; 1774 } 1775 else 1776 mc = expectedModCount; 1777 if (tab != null && tab.length >= hi && 1778 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1779 Node<K,V> p = current; 1780 current = null; 1781 do { 1782 if (p == null) 1783 p = tab[i++]; 1784 else { 1785 action.accept(p.value); 1786 p = p.next; 1787 } 1788 } while (p != null || i < hi); 1789 if (m.modCount != mc) 1790 throw new ConcurrentModificationException(); 1791 } 1792 } 1793 tryAdvance(Consumer<? super V> action)1794 public boolean tryAdvance(Consumer<? super V> action) { 1795 int hi; 1796 if (action == null) 1797 throw new NullPointerException(); 1798 Node<K,V>[] tab = map.table; 1799 if (tab != null && tab.length >= (hi = getFence()) && index >= 0) { 1800 while (current != null || index < hi) { 1801 if (current == null) 1802 current = tab[index++]; 1803 else { 1804 V v = current.value; 1805 current = current.next; 1806 action.accept(v); 1807 if (map.modCount != expectedModCount) 1808 throw new ConcurrentModificationException(); 1809 return true; 1810 } 1811 } 1812 } 1813 return false; 1814 } 1815 characteristics()1816 public int characteristics() { 1817 return (fence < 0 || est == map.size ? Spliterator.SIZED : 0); 1818 } 1819 } 1820 1821 static final class EntrySpliterator<K,V> 1822 extends HashMapSpliterator<K,V> 1823 implements Spliterator<Map.Entry<K,V>> { EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est, int expectedModCount)1824 EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est, 1825 int expectedModCount) { 1826 super(m, origin, fence, est, expectedModCount); 1827 } 1828 trySplit()1829 public EntrySpliterator<K,V> trySplit() { 1830 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1831 return (lo >= mid || current != null) ? null : 1832 new EntrySpliterator<>(map, lo, index = mid, est >>>= 1, 1833 expectedModCount); 1834 } 1835 forEachRemaining(Consumer<? super Map.Entry<K,V>> action)1836 public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) { 1837 int i, hi, mc; 1838 if (action == null) 1839 throw new NullPointerException(); 1840 HashMap<K,V> m = map; 1841 Node<K,V>[] tab = m.table; 1842 if ((hi = fence) < 0) { 1843 mc = expectedModCount = m.modCount; 1844 hi = fence = (tab == null) ? 0 : tab.length; 1845 } 1846 else 1847 mc = expectedModCount; 1848 if (tab != null && tab.length >= hi && 1849 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1850 Node<K,V> p = current; 1851 current = null; 1852 do { 1853 if (p == null) 1854 p = tab[i++]; 1855 else { 1856 action.accept(p); 1857 p = p.next; 1858 } 1859 } while (p != null || i < hi); 1860 if (m.modCount != mc) 1861 throw new ConcurrentModificationException(); 1862 } 1863 } 1864 tryAdvance(Consumer<? super Map.Entry<K,V>> action)1865 public boolean tryAdvance(Consumer<? 
 super Map.Entry<K,V>> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            Node<K,V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        Node<K,V> e = current;
                        current = current.next;
                        action.accept(e);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                Spliterator.DISTINCT;
        }
    }

    /* ------------------------------------------------------------ */
    // LinkedHashMap support


    /*
     * The following package-protected methods are designed to be
     * overridden by LinkedHashMap, but not by any other subclass.
     * Nearly all other internal methods are also package-protected
     * but are declared final, so can be used by LinkedHashMap, view
     * classes, and HashSet.
     */

    // Create a regular (non-tree) node
    Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
        return new Node<>(hash, key, value, next);
    }

    // For conversion from TreeNodes to plain nodes
    Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
        return new Node<>(p.hash, p.key, p.value, next);
    }

    // Create a tree bin node
    TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
        return new TreeNode<>(hash, key, value, next);
    }

    // For treeifyBin
    TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
        return new TreeNode<>(p.hash, p.key, p.value, next);
    }

    /**
     * Reset to initial default state.  Called by clone and readObject.
     */
    void reinitialize() {
        table = null;
        entrySet = null;
        keySet = null;
        values = null;
        modCount = 0;
        threshold = 0;
        size = 0;
    }

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K,V> p) { }
    void afterNodeInsertion(boolean evict) { }
    void afterNodeRemoval(Node<K,V> p) { }
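
    /*
     * Illustrative sketch (not part of this class): LinkedHashMap relies on
     * the three callbacks above to maintain its doubly-linked entry list.
     * One user-visible consequence, documented by LinkedHashMap itself, is
     * that a subclass can serve as a small LRU cache. The LruCache class
     * below is a hypothetical example, not code from this file.
     *
     *   class LruCache<K,V> extends LinkedHashMap<K,V> {
     *       private final int maxEntries;
     *       LruCache(int maxEntries) {
     *           super(16, 0.75f, true);   // access-ordered: afterNodeAccess moves entries to the tail
     *           this.maxEntries = maxEntries;
     *       }
     *       @Override
     *       protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
     *           return size() > maxEntries;   // checked from afterNodeInsertion
     *       }
     *   }
     */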

    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        Node<K,V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    /* ------------------------------------------------------------ */
    // Tree bins

    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so it can be used as an extension of either a regular
     * or a linked node.
     */
    static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;
        TreeNode(int hash, K key, V val, Node<K,V> next) {
            super(hash, key, val, next);
        }

        /**
         * Returns root of tree containing this node.
         */
        final TreeNode<K,V> root() {
            for (TreeNode<K,V> r = this, p;;) {
                if ((p = r.parent) == null)
                    return r;
                r = p;
            }
        }

        /**
         * Ensures that the given root is the first node of its bin.
         */
        static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
            int n;
            if (root != null && tab != null && (n = tab.length) > 0) {
                int index = (n - 1) & root.hash;
                TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
                if (root != first) {
                    Node<K,V> rn;
                    tab[index] = root;
                    TreeNode<K,V> rp = root.prev;
                    if ((rn = root.next) != null)
                        ((TreeNode<K,V>)rn).prev = rp;
                    if (rp != null)
                        rp.next = rn;
                    if (first != null)
                        first.prev = root;
                    root.next = first;
                    root.prev = null;
                }
                assert checkInvariants(root);
            }
        }

        /**
         * Finds the node starting at root p with the given hash and key.
         * The kc argument caches comparableClassFor(key) upon first use
         * comparing keys.
         */
        final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
            TreeNode<K,V> p = this;
            do {
                int ph, dir; K pk;
                TreeNode<K,V> pl = p.left, pr = p.right, q;
                if ((ph = p.hash) > h)
                    p = pl;
                else if (ph < h)
                    p = pr;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if (pl == null)
                    p = pr;
                else if (pr == null)
                    p = pl;
                else if ((kc != null ||
                          (kc = comparableClassFor(k)) != null) &&
                         (dir = compareComparables(kc, k, pk)) != 0)
                    p = (dir < 0) ? pl : pr;
                else if ((q = pr.find(h, k, kc)) != null)
                    return q;
                else
                    p = pl;
            } while (p != null);
            return null;
        }

        /**
         * Calls find for root node.
         */
        final TreeNode<K,V> getTreeNode(int h, Object k) {
            return ((parent != null) ? root() : this).find(h, k, null);
        }

        /**
         * Tie-breaking utility for ordering insertions when equal
         * hashCodes and non-comparable. We don't require a total
         * order, just a consistent insertion rule to maintain
         * equivalence across rebalancings. Tie-breaking further than
         * necessary simplifies testing a bit.
         */
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || b == null ||
                (d = a.getClass().getName().
                 compareTo(b.getClass().getName())) == 0)
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                     -1 : 1);
            return d;
        }

        /**
         * Forms tree of the nodes linked from this node.
2068 */ treeify(Node<K,V>[] tab)2069 final void treeify(Node<K,V>[] tab) { 2070 TreeNode<K,V> root = null; 2071 for (TreeNode<K,V> x = this, next; x != null; x = next) { 2072 next = (TreeNode<K,V>)x.next; 2073 x.left = x.right = null; 2074 if (root == null) { 2075 x.parent = null; 2076 x.red = false; 2077 root = x; 2078 } 2079 else { 2080 K k = x.key; 2081 int h = x.hash; 2082 Class<?> kc = null; 2083 for (TreeNode<K,V> p = root;;) { 2084 int dir, ph; 2085 K pk = p.key; 2086 if ((ph = p.hash) > h) 2087 dir = -1; 2088 else if (ph < h) 2089 dir = 1; 2090 else if ((kc == null && 2091 (kc = comparableClassFor(k)) == null) || 2092 (dir = compareComparables(kc, k, pk)) == 0) 2093 dir = tieBreakOrder(k, pk); 2094 2095 TreeNode<K,V> xp = p; 2096 if ((p = (dir <= 0) ? p.left : p.right) == null) { 2097 x.parent = xp; 2098 if (dir <= 0) 2099 xp.left = x; 2100 else 2101 xp.right = x; 2102 root = balanceInsertion(root, x); 2103 break; 2104 } 2105 } 2106 } 2107 } 2108 moveRootToFront(tab, root); 2109 } 2110 2111 /** 2112 * Returns a list of non-TreeNodes replacing those linked from 2113 * this node. 2114 */ untreeify(HashMap<K,V> map)2115 final Node<K,V> untreeify(HashMap<K,V> map) { 2116 Node<K,V> hd = null, tl = null; 2117 for (Node<K,V> q = this; q != null; q = q.next) { 2118 Node<K,V> p = map.replacementNode(q, null); 2119 if (tl == null) 2120 hd = p; 2121 else 2122 tl.next = p; 2123 tl = p; 2124 } 2125 return hd; 2126 } 2127 2128 /** 2129 * Tree version of putVal. 2130 */ putTreeVal(HashMap<K,V> map, Node<K,V>[] tab, int h, K k, V v)2131 final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab, 2132 int h, K k, V v) { 2133 Class<?> kc = null; 2134 boolean searched = false; 2135 TreeNode<K,V> root = (parent != null) ? root() : this; 2136 for (TreeNode<K,V> p = root;;) { 2137 int dir, ph; K pk; 2138 if ((ph = p.hash) > h) 2139 dir = -1; 2140 else if (ph < h) 2141 dir = 1; 2142 else if ((pk = p.key) == k || (k != null && k.equals(pk))) 2143 return p; 2144 else if ((kc == null && 2145 (kc = comparableClassFor(k)) == null) || 2146 (dir = compareComparables(kc, k, pk)) == 0) { 2147 if (!searched) { 2148 TreeNode<K,V> q, ch; 2149 searched = true; 2150 if (((ch = p.left) != null && 2151 (q = ch.find(h, k, kc)) != null) || 2152 ((ch = p.right) != null && 2153 (q = ch.find(h, k, kc)) != null)) 2154 return q; 2155 } 2156 dir = tieBreakOrder(k, pk); 2157 } 2158 2159 TreeNode<K,V> xp = p; 2160 if ((p = (dir <= 0) ? p.left : p.right) == null) { 2161 Node<K,V> xpn = xp.next; 2162 TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn); 2163 if (dir <= 0) 2164 xp.left = x; 2165 else 2166 xp.right = x; 2167 xp.next = x; 2168 x.parent = x.prev = xp; 2169 if (xpn != null) 2170 ((TreeNode<K,V>)xpn).prev = x; 2171 moveRootToFront(tab, balanceInsertion(root, x)); 2172 return null; 2173 } 2174 } 2175 } 2176 2177 /** 2178 * Removes the given node, that must be present before this call. 2179 * This is messier than typical red-black deletion code because we 2180 * cannot swap the contents of an interior node with a leaf 2181 * successor that is pinned by "next" pointers that are accessible 2182 * independently during traversal. So instead we swap the tree 2183 * linkages. If the current tree appears to have too few nodes, 2184 * the bin is converted back to a plain bin. (The test triggers 2185 * somewhere between 2 and 6 nodes, depending on tree structure). 
2186 */ removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab, boolean movable)2187 final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab, 2188 boolean movable) { 2189 int n; 2190 if (tab == null || (n = tab.length) == 0) 2191 return; 2192 int index = (n - 1) & hash; 2193 TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl; 2194 TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev; 2195 if (pred == null) 2196 tab[index] = first = succ; 2197 else 2198 pred.next = succ; 2199 if (succ != null) 2200 succ.prev = pred; 2201 if (first == null) 2202 return; 2203 if (root.parent != null) 2204 root = root.root(); 2205 if (root == null 2206 || (movable 2207 && (root.right == null 2208 || (rl = root.left) == null 2209 || rl.left == null))) { 2210 tab[index] = first.untreeify(map); // too small 2211 return; 2212 } 2213 TreeNode<K,V> p = this, pl = left, pr = right, replacement; 2214 if (pl != null && pr != null) { 2215 TreeNode<K,V> s = pr, sl; 2216 while ((sl = s.left) != null) // find successor 2217 s = sl; 2218 boolean c = s.red; s.red = p.red; p.red = c; // swap colors 2219 TreeNode<K,V> sr = s.right; 2220 TreeNode<K,V> pp = p.parent; 2221 if (s == pr) { // p was s's direct parent 2222 p.parent = s; 2223 s.right = p; 2224 } 2225 else { 2226 TreeNode<K,V> sp = s.parent; 2227 if ((p.parent = sp) != null) { 2228 if (s == sp.left) 2229 sp.left = p; 2230 else 2231 sp.right = p; 2232 } 2233 if ((s.right = pr) != null) 2234 pr.parent = s; 2235 } 2236 p.left = null; 2237 if ((p.right = sr) != null) 2238 sr.parent = p; 2239 if ((s.left = pl) != null) 2240 pl.parent = s; 2241 if ((s.parent = pp) == null) 2242 root = s; 2243 else if (p == pp.left) 2244 pp.left = s; 2245 else 2246 pp.right = s; 2247 if (sr != null) 2248 replacement = sr; 2249 else 2250 replacement = p; 2251 } 2252 else if (pl != null) 2253 replacement = pl; 2254 else if (pr != null) 2255 replacement = pr; 2256 else 2257 replacement = p; 2258 if (replacement != p) { 2259 TreeNode<K,V> pp = replacement.parent = p.parent; 2260 if (pp == null) 2261 (root = replacement).red = false; 2262 else if (p == pp.left) 2263 pp.left = replacement; 2264 else 2265 pp.right = replacement; 2266 p.left = p.right = p.parent = null; 2267 } 2268 2269 TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement); 2270 2271 if (replacement == p) { // detach 2272 TreeNode<K,V> pp = p.parent; 2273 p.parent = null; 2274 if (pp != null) { 2275 if (p == pp.left) 2276 pp.left = null; 2277 else if (p == pp.right) 2278 pp.right = null; 2279 } 2280 } 2281 if (movable) 2282 moveRootToFront(tab, r); 2283 } 2284 2285 /** 2286 * Splits nodes in a tree bin into lower and upper tree bins, 2287 * or untreeifies if now too small. Called only from resize; 2288 * see above discussion about split bits and indices. 
2289 * 2290 * @param map the map 2291 * @param tab the table for recording bin heads 2292 * @param index the index of the table being split 2293 * @param bit the bit of hash to split on 2294 */ split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit)2295 final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) { 2296 TreeNode<K,V> b = this; 2297 // Relink into lo and hi lists, preserving order 2298 TreeNode<K,V> loHead = null, loTail = null; 2299 TreeNode<K,V> hiHead = null, hiTail = null; 2300 int lc = 0, hc = 0; 2301 for (TreeNode<K,V> e = b, next; e != null; e = next) { 2302 next = (TreeNode<K,V>)e.next; 2303 e.next = null; 2304 if ((e.hash & bit) == 0) { 2305 if ((e.prev = loTail) == null) 2306 loHead = e; 2307 else 2308 loTail.next = e; 2309 loTail = e; 2310 ++lc; 2311 } 2312 else { 2313 if ((e.prev = hiTail) == null) 2314 hiHead = e; 2315 else 2316 hiTail.next = e; 2317 hiTail = e; 2318 ++hc; 2319 } 2320 } 2321 2322 if (loHead != null) { 2323 if (lc <= UNTREEIFY_THRESHOLD) 2324 tab[index] = loHead.untreeify(map); 2325 else { 2326 tab[index] = loHead; 2327 if (hiHead != null) // (else is already treeified) 2328 loHead.treeify(tab); 2329 } 2330 } 2331 if (hiHead != null) { 2332 if (hc <= UNTREEIFY_THRESHOLD) 2333 tab[index + bit] = hiHead.untreeify(map); 2334 else { 2335 tab[index + bit] = hiHead; 2336 if (loHead != null) 2337 hiHead.treeify(tab); 2338 } 2339 } 2340 } 2341 2342 /* ------------------------------------------------------------ */ 2343 // Red-black tree methods, all adapted from CLR 2344 rotateLeft(TreeNode<K,V> root, TreeNode<K,V> p)2345 static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root, 2346 TreeNode<K,V> p) { 2347 TreeNode<K,V> r, pp, rl; 2348 if (p != null && (r = p.right) != null) { 2349 if ((rl = p.right = r.left) != null) 2350 rl.parent = p; 2351 if ((pp = r.parent = p.parent) == null) 2352 (root = r).red = false; 2353 else if (pp.left == p) 2354 pp.left = r; 2355 else 2356 pp.right = r; 2357 r.left = p; 2358 p.parent = r; 2359 } 2360 return root; 2361 } 2362 rotateRight(TreeNode<K,V> root, TreeNode<K,V> p)2363 static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root, 2364 TreeNode<K,V> p) { 2365 TreeNode<K,V> l, pp, lr; 2366 if (p != null && (l = p.left) != null) { 2367 if ((lr = p.left = l.right) != null) 2368 lr.parent = p; 2369 if ((pp = l.parent = p.parent) == null) 2370 (root = l).red = false; 2371 else if (pp.right == p) 2372 pp.right = l; 2373 else 2374 pp.left = l; 2375 l.right = p; 2376 p.parent = l; 2377 } 2378 return root; 2379 } 2380 balanceInsertion(TreeNode<K,V> root, TreeNode<K,V> x)2381 static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root, 2382 TreeNode<K,V> x) { 2383 x.red = true; 2384 for (TreeNode<K,V> xp, xpp, xppl, xppr;;) { 2385 if ((xp = x.parent) == null) { 2386 x.red = false; 2387 return x; 2388 } 2389 else if (!xp.red || (xpp = xp.parent) == null) 2390 return root; 2391 if (xp == (xppl = xpp.left)) { 2392 if ((xppr = xpp.right) != null && xppr.red) { 2393 xppr.red = false; 2394 xp.red = false; 2395 xpp.red = true; 2396 x = xpp; 2397 } 2398 else { 2399 if (x == xp.right) { 2400 root = rotateLeft(root, x = xp); 2401 xpp = (xp = x.parent) == null ? 
null : xp.parent; 2402 } 2403 if (xp != null) { 2404 xp.red = false; 2405 if (xpp != null) { 2406 xpp.red = true; 2407 root = rotateRight(root, xpp); 2408 } 2409 } 2410 } 2411 } 2412 else { 2413 if (xppl != null && xppl.red) { 2414 xppl.red = false; 2415 xp.red = false; 2416 xpp.red = true; 2417 x = xpp; 2418 } 2419 else { 2420 if (x == xp.left) { 2421 root = rotateRight(root, x = xp); 2422 xpp = (xp = x.parent) == null ? null : xp.parent; 2423 } 2424 if (xp != null) { 2425 xp.red = false; 2426 if (xpp != null) { 2427 xpp.red = true; 2428 root = rotateLeft(root, xpp); 2429 } 2430 } 2431 } 2432 } 2433 } 2434 } 2435 balanceDeletion(TreeNode<K,V> root, TreeNode<K,V> x)2436 static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root, 2437 TreeNode<K,V> x) { 2438 for (TreeNode<K,V> xp, xpl, xpr;;) { 2439 if (x == null || x == root) 2440 return root; 2441 else if ((xp = x.parent) == null) { 2442 x.red = false; 2443 return x; 2444 } 2445 else if (x.red) { 2446 x.red = false; 2447 return root; 2448 } 2449 else if ((xpl = xp.left) == x) { 2450 if ((xpr = xp.right) != null && xpr.red) { 2451 xpr.red = false; 2452 xp.red = true; 2453 root = rotateLeft(root, xp); 2454 xpr = (xp = x.parent) == null ? null : xp.right; 2455 } 2456 if (xpr == null) 2457 x = xp; 2458 else { 2459 TreeNode<K,V> sl = xpr.left, sr = xpr.right; 2460 if ((sr == null || !sr.red) && 2461 (sl == null || !sl.red)) { 2462 xpr.red = true; 2463 x = xp; 2464 } 2465 else { 2466 if (sr == null || !sr.red) { 2467 if (sl != null) 2468 sl.red = false; 2469 xpr.red = true; 2470 root = rotateRight(root, xpr); 2471 xpr = (xp = x.parent) == null ? 2472 null : xp.right; 2473 } 2474 if (xpr != null) { 2475 xpr.red = (xp == null) ? false : xp.red; 2476 if ((sr = xpr.right) != null) 2477 sr.red = false; 2478 } 2479 if (xp != null) { 2480 xp.red = false; 2481 root = rotateLeft(root, xp); 2482 } 2483 x = root; 2484 } 2485 } 2486 } 2487 else { // symmetric 2488 if (xpl != null && xpl.red) { 2489 xpl.red = false; 2490 xp.red = true; 2491 root = rotateRight(root, xp); 2492 xpl = (xp = x.parent) == null ? null : xp.left; 2493 } 2494 if (xpl == null) 2495 x = xp; 2496 else { 2497 TreeNode<K,V> sl = xpl.left, sr = xpl.right; 2498 if ((sl == null || !sl.red) && 2499 (sr == null || !sr.red)) { 2500 xpl.red = true; 2501 x = xp; 2502 } 2503 else { 2504 if (sl == null || !sl.red) { 2505 if (sr != null) 2506 sr.red = false; 2507 xpl.red = true; 2508 root = rotateLeft(root, xpl); 2509 xpl = (xp = x.parent) == null ? 2510 null : xp.left; 2511 } 2512 if (xpl != null) { 2513 xpl.red = (xp == null) ? 
 false : xp.red;
                                if ((sl = xpl.left) != null)
                                    sl.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateRight(root, xp);
                            }
                            x = root;
                        }
                    }
                }
            }
        }

        /**
         * Recursive invariant check
         */
        static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
            TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
                tb = t.prev, tn = (TreeNode<K,V>)t.next;
            if (tb != null && tb.next != t)
                return false;
            if (tn != null && tn.prev != t)
                return false;
            if (tp != null && t != tp.left && t != tp.right)
                return false;
            if (tl != null && (tl.parent != t || tl.hash > t.hash))
                return false;
            if (tr != null && (tr.parent != t || tr.hash < t.hash))
                return false;
            if (t.red && tl != null && tl.red && tr != null && tr.red)
                return false;
            if (tl != null && !checkInvariants(tl))
                return false;
            if (tr != null && !checkInvariants(tr))
                return false;
            return true;
        }
    }

    /**
     * Calculate initial capacity for HashMap-based classes, from expected size
     * and default load factor (0.75).
     *
     * @param numMappings the expected number of mappings
     * @return initial capacity for HashMap-based classes.
     * @since 19
     */
    static int calculateHashMapCapacity(int numMappings) {
        return (int) Math.ceil(numMappings / (double) DEFAULT_LOAD_FACTOR);
    }

    /**
     * Creates a new, empty HashMap suitable for the expected number of mappings.
     * The returned map uses the default load factor of 0.75, and its initial
     * capacity is generally large enough so that the expected number of mappings
     * can be added without resizing the map.
     *
     * @param numMappings the expected number of mappings
     * @param <K>         the type of keys maintained by the new map
     * @param <V>         the type of mapped values
     * @return the newly created map
     * @throws IllegalArgumentException if numMappings is negative
     * @since 19
     */
    public static <K, V> HashMap<K, V> newHashMap(int numMappings) {
        if (numMappings < 0) {
            throw new IllegalArgumentException("Negative number of mappings: " + numMappings);
        }
        return new HashMap<>(calculateHashMapCapacity(numMappings));
    }

}
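
/*
 * Illustrative usage sketch (not part of the class above): newHashMap presizes
 * the table from the expected number of mappings and the default load factor,
 * so adding that many entries should not trigger a resize. The arithmetic below
 * simply works through calculateHashMapCapacity by hand; "users" is an example
 * variable name.
 *
 *   // expected 12 mappings -> ceil(12 / 0.75) = 16 initial capacity
 *   HashMap<String, Integer> users = HashMap.newHashMap(12);
 *   for (int i = 0; i < 12; i++) {
 *       users.put("user" + i, i);   // stays within the presized table
 *   }
 */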