Quote (Mastersam93 @ May 19 2017 12:24pm)
That just blew my mind.
But 9 times out of 10, compressing files is just to save disk space or bandwidth. If you need to work with the contents of a directory, is there a reason you can't just decompress it entirely first?
Directory size is one issue.
The reason for the memory constraint is that any library that handles file loading itself will not work on this system. I would either have to load the entire archive into memory or somehow wrap the library's loading calls around the system's SDK.
That's why I want either to change the way the library reads the file so it goes through the SDK's read calls, or to read the entire blob into memory and let the library do whatever it pleases with it.
Then again, loading an entire file into memory may not be the best solution either.
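The first option above (routing the library's reads through your own calls instead of letting it open the file itself) can be sketched with Python's `zipfile` module, which accepts any file-like object. Here `SdkFile`, `sdk_read`, and `sdk_seek` are hypothetical stand-ins for a real device SDK, not actual APIs:

```python
import io
import zipfile

class SdkFile:
    """File-like adapter: the archive library calls read/seek/tell on us,
    and we forward them to the device SDK (simulated here by a BytesIO)."""

    def __init__(self, backing):
        # In a real port this would hold an SDK file handle.
        self._backing = backing

    def read(self, size=-1):
        # Stand-in for a hypothetical sdk_read(handle, size).
        return self._backing.read(size)

    def seek(self, offset, whence=io.SEEK_SET):
        # Stand-in for a hypothetical sdk_seek(handle, offset, whence).
        return self._backing.seek(offset, whence)

    def tell(self):
        return self._backing.tell()

    def seekable(self):
        return True

# Build a small zip in memory to stand in for the on-device archive.
raw = io.BytesIO()
with zipfile.ZipFile(raw, "w") as zf:
    zf.writestr("hello.txt", "hello world")
raw.seek(0)

# The library never opens a file itself; every read goes through SdkFile.
with zipfile.ZipFile(SdkFile(raw)) as zf:
    data = zf.read("hello.txt")
print(data)  # b'hello world'
```

The upside of this approach is that only the pieces the library actually asks for pass through memory, instead of the whole archive.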
Plus, archiving isn't only about compression.
Archiving is a way to store many files in one container. Compression is an extra step that most container formats (except tar) apply because it usually makes sense to do so.
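That split can be shown with a small stdlib sketch (not tied to the system discussed above): plain tar only containerizes, so the archive is never smaller than its contents, while gzip applied on top is the separate compression step that makes a `.tar.gz`:

```python
import gzip
import io
import tarfile

payload = b"A" * 10_000  # highly compressible data

# Plain tar: a container only -- the member is stored byte-for-byte,
# plus 512-byte headers and end-of-archive padding.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="data.bin")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
plain_size = len(buf.getvalue())

# The same tar stream gzip-compressed (what .tar.gz adds on top).
compressed_size = len(gzip.compress(buf.getvalue()))

print(plain_size > len(payload))       # True: container overhead, no shrinking
print(compressed_size < len(payload))  # True: compression is the separate step
```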
Edit: Anyway, I decided to take a different approach. I'm going to make a web frontend that the application on the system can talk to. This offloads the archive handling, memory, and file-size problems to a remote standardized server while keeping the functionality on the device.
This post was edited by AbDuCt on May 19 2017 10:44am