We ran into a problem where a corrupt 1.43 GB 7z file triggered the allocation of about 138 million SevenZArchiveEntry instances, which would use about 12 GB of memory. Sadly I'm unable to share the file. If enough memory is available, the following exception is eventually thrown:
java.io.IOException: Start header corrupt and unable to guess end Header
7z itself aborts very quickly when I try to list the contents of the file:
7z l "corrupt.7z"
Scanning the drive for archives:
1 file, 1537752212 bytes (1467 MiB)
Listing archive: corrupt.7z
ERROR: corrupt.7z : corrupt.7z
Open ERROR: Can not open the file as [7z] archive
Is not archive
I hacked together the attached patch, which reduces the memory allocation to about 1 GB. Lazy instantiation of the entries could therefore be a good solution to the problem. Optimally, the entries would only be created once the headers have been parsed successfully.
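To illustrate the idea, here is a minimal sketch (not the attached patch, and not an existing Commons Compress API) of a guard that rejects an implausible entry count declared in the header before any entries are allocated. The class name, the configurable limit, and the ~90-byte per-entry estimate (roughly 12 GB divided by 138 million entries from this report) are all assumptions for illustration:

```java
import java.io.IOException;

// Hypothetical guard: validate the entry count declared by an
// (untrusted) archive header against a configurable memory budget
// before allocating the entry array.
public class EntryAllocationGuard {

    // Rough per-entry heap footprint; hypothetical figure derived from
    // ~12 GB / 138 million entries in the report above.
    private static final long ESTIMATED_ENTRY_BYTES = 90;

    private final long maxMemoryBytes;

    public EntryAllocationGuard(long maxMemoryBytes) {
        this.maxMemoryBytes = maxMemoryBytes;
    }

    public void checkAllocation(long numEntries) throws IOException {
        if (numEntries < 0) {
            throw new IOException("Negative entry count: " + numEntries);
        }
        long estimated = numEntries * ESTIMATED_ENTRY_BYTES;
        if (estimated > maxMemoryBytes) {
            // Fail fast instead of allocating gigabytes for a file
            // whose headers may turn out to be corrupt anyway.
            throw new IOException("Archive header declares " + numEntries
                    + " entries (~" + (estimated >> 20) + " MiB estimated); "
                    + "exceeds configured limit of "
                    + (maxMemoryBytes >> 20) + " MiB");
        }
    }
}
```

With a 1 GiB budget, a count of a few thousand entries passes while the 138 million entries from the corrupt file are rejected immediately. Such a check complements (rather than replaces) lazy instantiation, since it stops the pathological case before any allocation happens.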
This message was sent by Atlassian Jira