<rdar://problem/10568905>
Fixed an issue where our new accelerator tables could cause a crash when we
got a full 32-bit hash match, yet a C string mismatch. DWARFMappedHash::Prologue
had a member variable named "min_hash_data_byte_size" that computed the byte
size of the HashData so we could skip hash data efficiently. It started out
with a byte size value of 4. When we read the table in from disk, we would
clear the atom array and then read it from disk, but the byte size would still
be set to 4, and we would then increment this count as we read each atom from
disk. So the byte size of the HashData was off, which meant that when we got a
lookup whose 32-bit hash matched but whose C string did NOT match (which is
very rare), we would try to skip the data for that hash, add an incorrect
offset, get off in our parsing of the hash data, and crash.

To fix this I added a few safeguards (sketched in the code below):

1 - I now correctly clear the hash data size when we reset the atom array,
    using the new DWARFMappedHash::Prologue::ClearAtoms() function.
2 - I now always let AppendAtom() calculate the byte size of the hash data
    (before, we were sometimes doing this manually, which happened to be
    correct but was fragile).
3 - I now track whether each HashData entry has a fixed byte size or not, and
    "do the right thing" when we need to skip the data.
4 - If we do get off in the weeds, I make sure to return an error and stop any
    further parsing.

llvm-svn: 147334
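For illustration, here is a minimal C++ sketch of the byte-size bookkeeping
described by safeguards 1-3. The Atom struct, the form-code names, and
SkipHashData() are hypothetical stand-ins for this sketch, not the actual
LLDB definitions.

    #include <cstdint>
    #include <vector>

    // Hypothetical stand-ins for the DWARF form codes and atom record; the
    // real definitions live in LLDB's DWARF plugin.
    enum DWARFForm : uint8_t {
      FormData1, FormData2, FormData4, FormData8, FormBlock
    };

    struct Atom {
      uint16_t type;
      DWARFForm form;
    };

    struct Prologue {
      std::vector<Atom> atoms;
      uint32_t min_hash_data_byte_size = 0;
      bool hash_data_has_fixed_byte_size = true;

      // Safeguard 1: resetting the atom array must also reset the cached
      // byte size, otherwise a table read from disk double-counts its atoms.
      void ClearAtoms() {
        atoms.clear();
        min_hash_data_byte_size = 0;
        hash_data_has_fixed_byte_size = true;
      }

      // Safeguard 2: AppendAtom() is the single place that accumulates the
      // byte size, so callers can never get the count out of sync by hand.
      void AppendAtom(uint16_t type, DWARFForm form) {
        atoms.push_back({type, form});
        switch (form) {
        case FormData1: min_hash_data_byte_size += 1; break;
        case FormData2: min_hash_data_byte_size += 2; break;
        case FormData4: min_hash_data_byte_size += 4; break;
        case FormData8: min_hash_data_byte_size += 8; break;
        case FormBlock:
          // Safeguard 3: a variable-length form means HashData entries
          // cannot be skipped with simple pointer arithmetic.
          hash_data_has_fixed_byte_size = false;
          break;
        }
      }

      // Safeguard 3 (continued): skip "count" HashData entries only when the
      // size is known to be fixed; otherwise the caller must parse entry by
      // entry and, per safeguard 4, return an error on any inconsistency
      // instead of silently mis-parsing.
      bool SkipHashData(uint32_t count, uint32_t &offset) const {
        if (!hash_data_has_fixed_byte_size)
          return false;
        offset += count * min_hash_data_byte_size;
        return true;
      }
    };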