This directory contains several hash-map implementations, similar in 
API to SGI's hash_map class, but with different performance 
characteristics.  sparse_hash_map uses very little space overhead, 1-2 
bits per entry.  dense_hash_map is very fast, particularly on lookup.
(sparse_hash_set and dense_hash_set are the set versions of these 
routines.)  On the other hand, these classes have requirements that 
may not make them appropriate for all applications. 
 
All these implementations use a hashtable with internal quadratic
probing.  This method is space-efficient -- there is no pointer 
overhead -- and time-efficient for good hash functions. 
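
As an illustration of the probing scheme (not the library's actual
code), here is a minimal standalone sketch of quadratic probing in an
open-addressed table, using triangular-number offsets:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Toy open-addressing table with quadratic probing, for illustration
// only -- the real sparsehash containers are far more sophisticated.
// Uses -1 as a sentinel for "empty slot", so -1 cannot be stored.
class ProbeTable {
 public:
  explicit ProbeTable(std::size_t buckets) : slots_(buckets, -1) {}

  bool insert(int key) {
    std::size_t h = std::hash<int>{}(key) % slots_.size();
    // Probe sequence h, h+1, h+3, h+6, ... (triangular numbers).
    for (std::size_t i = 0; i < slots_.size(); ++i) {
      std::size_t idx = (h + i * (i + 1) / 2) % slots_.size();
      if (slots_[idx] == -1 || slots_[idx] == key) {
        slots_[idx] = key;
        return true;
      }
    }
    return false;  // table is full
  }

  bool contains(int key) const {
    std::size_t h = std::hash<int>{}(key) % slots_.size();
    for (std::size_t i = 0; i < slots_.size(); ++i) {
      std::size_t idx = (h + i * (i + 1) / 2) % slots_.size();
      if (slots_[idx] == -1) return false;  // hit an empty slot: absent
      if (slots_[idx] == key) return true;
    }
    return false;
  }

 private:
  std::vector<int> slots_;
};
```

Note that there is no pointer per entry, only the flat slot array --
which is the space advantage the paragraph above describes.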
 
COMPILING 
--------- 
To compile test applications with these classes, run ./configure 
followed by make.  To install these header files on your system, run 
'make install'.  (On Windows, the instructions are different; see 
README_windows.txt.)  See INSTALL for more details. 
 
This code should work on any modern C++ system.  It has been tested on 
Linux (Ubuntu, Fedora, RedHat, Debian), Solaris 10 x86, FreeBSD 6.0, 
OS X 10.3 and 10.4, and Windows under both VC++7 and VC++8. 
 
USING 
----- 
See the html files in the doc directory for small example programs 
that use these classes.  It's enough to just include the header file: 
 
   #include <sparsehash/sparse_hash_map> // or sparse_hash_set, dense_hash_map, ... 
   google::sparse_hash_map<int, int> number_mapper; 
 
and use the class the way you would other hash-map implementations. 
(Though see "API" below for caveats.) 
 
By default (you can change it via a flag to ./configure), these hash 
implementations are defined in the google namespace. 
 
API 
--- 
The API for sparse_hash_map, dense_hash_map, sparse_hash_set, and 
dense_hash_set is a superset of the API of SGI's hash_map class. 
See doc/sparse_hash_map.html, et al., for more information about the 
API. 
 
The usage of these classes differs from SGI's hash_map, and other 
hashtable implementations, in the following major ways: 
 
1) dense_hash_map requires you to set aside one key value as the 
   'empty bucket' value, set via the set_empty_key() method.  This 
   *MUST* be called before you can use the dense_hash_map.  It is 
   illegal to insert any elements into a dense_hash_map whose key is 
   equal to the empty-key. 
 
2) For both dense_hash_map and sparse_hash_map, if you wish to delete 
   elements from the hashtable, you must set aside a key value as the 
   'deleted bucket' value, set via the set_deleted_key() method.  If 
   your hash-map is insert-only, there is no need to call this 
   method.  If you call set_deleted_key(), it is illegal to insert any 
   elements into a dense_hash_map or sparse_hash_map whose key is 
   equal to the deleted-key. 
 
3) These hash-map implementations support I/O.  See below. 
 
There are also some smaller differences: 
 
1) The constructor takes an optional argument that specifies the 
   number of elements you expect to insert into the hashtable.  This 
   differs from SGI's hash_map implementation, which takes an optional 
   number of buckets. 
 
2) erase() does not immediately reclaim memory.  As a consequence, 
   erase() does not invalidate any iterators, making loops like this 
   correct: 
      for (it = ht.begin(); it != ht.end(); ++it) 
        if (...) ht.erase(it); 
   As another consequence, a series of erase() calls can leave your 
   hashtable using more memory than it needs to.  The hashtable will 
   automatically compact at the next call to insert(), but to 
   manually compact a hashtable, you can call 
      ht.resize(0) 
 
I/O 
--- 
In addition to the normal hash-map operations, sparse_hash_map can 
read and write hashtables to disk.  (dense_hash_map also has this API, 
but it has not yet been implemented, and writes will always fail.) 
 
In the simplest case, writing a hashtable is as easy as calling two 
methods on the hashtable: 
   ht.write_metadata(fp); 
   ht.write_nopointer_data(fp); 
 
Reading in this data is equally simple: 
   google::sparse_hash_map<...> ht; 
   ht.read_metadata(fp); 
   ht.read_nopointer_data(fp); 
 
The above is sufficient if the key and value do not contain any 
pointers: they are basic C types or agglomerations of basic C types. 
If the key and/or value do contain pointers, you can still store the 
hashtable by replacing write_nopointer_data() with a custom writing 
routine.  See sparse_hash_map.html et al. for more information. 
 
SPARSETABLE 
----------- 
In addition to the hash-map and hash-set classes, this package also 
provides sparsetable.h, an array implementation that uses space 
proportional to the number of elements in the array, rather than the 
maximum element index.  It uses very little space overhead: 2 to 5
bits per entry.  See doc/sparsetable.html for the API.
 
RESOURCE USAGE 
-------------- 
* sparse_hash_map has memory overhead of about 4 to 10 bits per 
  hash-map entry, assuming a typical average occupancy of 50%.
* dense_hash_map has a factor of 2-3 memory overhead: if your 
  hashtable data takes X bytes, dense_hash_map will use 3X-4X memory 
  total. 
 
Hashtables tend to double in size when resizing, creating an 
additional 50% space overhead.  dense_hash_map does in fact have a 
significant "high water mark" memory use requirement, which is 6 times
the size of hash entries in the table when resizing (when reaching 
50% occupancy, the table resizes to double the previous size, and the 
old table (2x) is copied to the new table (4x)).

sparse_hash_map, however, is written to need very little space 
overhead when resizing: only a few bits per hashtable entry. 
 
PERFORMANCE 
----------- 
You can compile and run the included file time_hash_map.cc to examine 
the performance of sparse_hash_map, dense_hash_map, and your native 
hash_map implementation on your system.  One test against the 
SGI hash_map implementation gave the following timing information for 
a simple find() call: 
   SGI hash_map:     22 ns 
   dense_hash_map:   13 ns 
   sparse_hash_map: 117 ns 
   SGI map:         113 ns 
 
See doc/performance.html for more detailed charts on resource usage 
and performance data. 
 
--- 
16 March 2005 
(Last updated: 12 September 2010)