Complete API

Arjun Barrett 4 years ago
parent
commit fcabaac487
11 changed files with 1,499 additions and 615 deletions
  1. .gitignore (+0 -3)
  2. .npmignore (+1 -0)
  3. README.md (+113 -2)
  4. docs/README.md (+152 -0)
  5. docs/interfaces/deflateoptions.md (+55 -0)
  6. docs/interfaces/gzipoptions.md (+81 -0)
  7. docs/interfaces/zliboptions.md (+57 -0)
  8. package.json (+13 -3)
  9. src/flate.ts (+0 -606)
  10. src/index.ts (+779 -1)
  11. yarn.lock (+248 -0)

+ 0 - 3
.gitignore

@@ -1,7 +1,4 @@
 node_modules/
 lib/
 esm/
-# following two are temporary before official tests
-tmp/
-test.js
 .DS_STORE

+ 1 - 0
.npmignore

@@ -1,4 +1,5 @@
 *
 !lib/
+!esm/
 !package.json
 !README.md

+ 113 - 2
README.md

@@ -1,4 +1,115 @@
 # fflate
-Native performance (de)compression in a 3 kB package
+High performance (de)compression in an 8kB package
 
-## Purpose
+## Why fflate?
+`fflate` (short for fast flate) is the **fastest, smallest, and most versatile** pure JavaScript compression and decompression library in existence, handily beating [`pako`](https://npmjs.com/package/pako), [`tiny-inflate`](https://npmjs.com/package/tiny-inflate), and [`UZIP.js`](https://github.com/photopea/UZIP.js) in performance benchmarks while being multiple times more lightweight. It includes support for DEFLATE, GZIP, and Zlib data. Data compressed by `fflate` can be decompressed by other tools, and vice versa.
+
+|                        | `pako` | `tiny-inflate`       | `UZIP.js`         | `fflate`                       |
+|------------------------|--------|----------------------|-------------------|--------------------------------|
+| Relative performance   | 1x     | up to 10x slower     | up to 40% faster  | **Up to 60% faster**           |
+| Bundle size (minified) | 44.5kB | **3 kB**             | 14.2kB            | 8kB **(3kB for only inflate)** |
+| Compression support    | ✅     | ❌                    | ✅                | ✅                             |
+| Thread/Worker safe     | ✅     | ✅                    | ❌                | ✅                             |
+| GZIP/Zlib support      | ✅     | ❌                    | ❌                | ✅                             |
+| Uses ES Modules        | ❌     | ❌                    | ❌                | ✅                             |
+
+## Usage
+
+Install `fflate`:
+```console
+npm install --save fflate
+```
+or
+```console
+yarn add fflate
+```
+
+Import:
+```js
+import * as fflate from 'fflate';
+// ALWAYS import only what you need to minimize bundle size.
+// So, if you just need gzip support:
+import { gzip, gunzip } from 'fflate';
+```
+Or `require` (if your environment doesn't support ES Modules):
+```js
+const fflate = require('fflate');
+```
+
+And use:
+```js
+// This is an ArrayBuffer of data
+const massiveFileBuf = await fetch('/getAMassiveFile').then(
+  res => res.arrayBuffer()
+);
+// To use fflate, you need a Uint8Array
+const massiveFile = new Uint8Array(massiveFileBuf);
+// Note that the Node.js Buffer works just fine as well:
+// const massiveFile = require('fs').readFileSync('aMassiveFile.txt');
+
+const notSoMassive = fflate.zlib(massiveFile, { level: 9 });
+const massiveAgain = fflate.unzlib(notSoMassive);
+```
+`fflate` can autodetect a compressed file's format as well:
+```js
+const compressed = new Uint8Array(
+  await fetch('/unknownFormatCompressedFile').then(res => res.arrayBuffer())
+);
+// Again, Node.js buffers work too. For example, the above could instead be:
+// Buffer.from('H4sIAAAAAAAA//NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=', 'base64');
+
+const decompressed = fflate.decompress(compressed);
+```
+
+Using strings is easy with `TextEncoder` and `TextDecoder`:
+```js
+const enc = new TextEncoder(), dec = new TextDecoder();
+const buf = enc.encode('Hello world!');
+// The default compression method is gzip
+// See the docs for more info on the mem option
+const compressed = fflate.compress(buf, { level: 6, mem: 8 });
+
+// When you need to decompress:
+const decompressed = fflate.decompress(compressed);
+const origText = dec.decode(decompressed);
+console.log(origText); // Hello world!
+```
+Note that encoding the compressed data as a string, like in `pako`, is not nearly as efficient as binary for data transfer. However, you can do it:
+```js
+const compressedDataToString = data => {
+  let result = '';
+  for (let value of data) {
+    result += String.fromCharCode(value);
+  }
+  return result;
+}
+const stringToCompressedData = str => {
+  let result = new Uint8Array(str.length);
+  for (let i = 0; i < str.length; ++i)
+    result[i] = str.charCodeAt(i);
+  return result;
+}
+const compressedString = compressedDataToString(fflate.compress(buf));
+const decompressed = fflate.decompress(stringToCompressedData(compressedString));
+```
+
+See the [documentation](https://github.com/101arrowz/fflate/blob/master/docs/README.md) for more detailed information about the API.
+
+## What makes `fflate` so fast?
+There are many reasons one might need a compression/decompression library. For example, if a user is uploading a massive file (say a 50 MB PDF) to your server, it's usually faster to compress it before uploading than to upload it directly. Likewise, if you want to generate a ZIP file for your users to download, you may need compression.
+
+For these reasons (and many more), many JavaScript compression/decompression libraries exist. However, the most popular one, [`pako`](https://npmjs.com/package/pako), is merely a clone of Zlib rewritten nearly line-for-line in JavaScript. Although it is by no means badly written, `pako` doesn't account for the many differences between JavaScript and C, and is therefore suboptimal. Moreover, even when minified, the library is nearly 45 kB; that may not seem like much, but for anyone concerned with bundle size (especially library authors), it's more weight than necessary.
+
+Note that there are some small libraries, like [`tiny-inflate`](https://npmjs.com/package/tiny-inflate), that handle decompression only; at 3 kB minified it can be appealing, but its performance is extremely lackluster, up to 100x slower than `pako` for some larger files in my tests.
+
+[`UZIP.js`](https://github.com/photopea/UZIP.js) is both faster (by up to 40%) and smaller (15 kB minified) than `pako`, and it contains a variety of innovations that make it excellent for both performance and compression ratio. However, the developer made a variety of tiny mistakes and inefficient design choices that make it imperfect. Moreover, it does not support GZIP or Zlib data directly; one must remove the headers manually to use `UZIP.js`.
+
+So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the entire build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.
+
+Before you decide that `fflate` is the end-all compression library, you should note that JavaScript simply cannot rival the performance of a compiled language. If you're willing to have 160 kB of extra weight and [much less browser support](https://caniuse.com/wasm), you can achieve around 30% more performance than `fflate` with a WASM build of Zlib like [`wasm-flate`](https://www.npmjs.com/package/wasm-flate). And if you're only using Node.js, just use the [native Zlib bindings](https://nodejs.org/api/zlib.html) that offer the best performance and compression ratios.
+
+## Browser support
+`fflate` makes heavy use of typed arrays (`Uint8Array`, `Uint16Array`, etc.). Typed arrays can be polyfilled at the cost of performance, but the most recent browser that doesn't support them [is from 2011](https://caniuse.com/typedarrays), so I wouldn't bother.
+
+## License
+MIT

+ 152 - 0
docs/README.md

@@ -0,0 +1,152 @@
+# fflate
+
+## Index
+
+### Interfaces
+
+* [DeflateOptions](interfaces/deflateoptions.md)
+* [GZIPOptions](interfaces/gzipoptions.md)
+* [ZlibOptions](interfaces/zliboptions.md)
+
+### Functions
+
+* [decompress](README.md#decompress)
+* [deflate](README.md#deflate)
+* [gunzip](README.md#gunzip)
+* [gzip](README.md#gzip)
+* [inflate](README.md#inflate)
+* [unzlib](README.md#unzlib)
+* [zlib](README.md#zlib)
+
+## Functions
+
+### decompress
+
+▸ **decompress**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
+
+*Defined in [index.ts:775](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L775)*
+
+Expands compressed GZIP, Zlib, or raw DEFLATE data, automatically detecting the format
+
+#### Parameters:
+
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to decompress |
+`out?` | Uint8Array | Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length. |
+
+**Returns:** Uint8Array
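+
+For example, a minimal, illustrative round trip (the input data here is arbitrary; `gzip` from this package produces the compressed input):
+
+```js
+import { gzip, decompress } from 'fflate';
+
+const data = new TextEncoder().encode('Hello world!');
+const gz = gzip(data, { level: 6 });
+// The GZIP wrapper is autodetected here; Zlib or raw DEFLATE input works the same way
+const restored = decompress(gz);
+// If the decompressed size is known, an output buffer of that length saves memory
+const restoredAgain = decompress(gz, new Uint8Array(data.length));
+```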
+
+___
+
+### deflate
+
+▸ **deflate**(`data`: Uint8Array, `opts`: [DeflateOptions](interfaces/deflateoptions.md)): Uint8Array
+
+*Defined in [index.ts:681](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L681)*
+
+Compresses data with DEFLATE without any wrapper
+
+#### Parameters:
+
+Name | Type | Default value | Description |
+------ | ------ | ------ | ------ |
+`data` | Uint8Array | - | The data to compress |
+`opts` | [DeflateOptions](interfaces/deflateoptions.md) | {} | The compression options |
+
+**Returns:** Uint8Array
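+
+A minimal, illustrative round trip with raw DEFLATE (the input data here is arbitrary; `inflate`, documented below, reverses it):
+
+```js
+import { deflate, inflate } from 'fflate';
+
+const data = new TextEncoder().encode('some data to pack');
+// Raw DEFLATE output: no GZIP or Zlib wrapper around it
+const packed = deflate(data, { level: 9, mem: 8 });
+const unpacked = inflate(packed);
+```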
+
+___
+
+### gunzip
+
+▸ **gunzip**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
+
+*Defined in [index.ts:721](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L721)*
+
+Expands GZIP data
+
+#### Parameters:
+
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to decompress |
+`out?` | Uint8Array | Where to write the data. GZIP already encodes the output size, so providing this doesn't save memory. |
+
+**Returns:** Uint8Array
+
+___
+
+### gzip
+
+▸ **gzip**(`data`: Uint8Array, `opts`: [GZIPOptions](interfaces/gzipoptions.md)): Uint8Array
+
+*Defined in [index.ts:701](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L701)*
+
+Compresses data with GZIP
+
+#### Parameters:
+
+Name | Type | Default value | Description |
+------ | ------ | ------ | ------ |
+`data` | Uint8Array | - | The data to compress |
+`opts` | [GZIPOptions](interfaces/gzipoptions.md) | {} | The compression options |
+
+**Returns:** Uint8Array
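+
+For example (illustrative only; `gunzip`, documented above, reverses the operation):
+
+```js
+import { gzip, gunzip } from 'fflate';
+
+const data = new TextEncoder().encode('logs to archive');
+// Produces a GZIP file, decompressible by any GZIP-aware tool
+const gz = gzip(data, { level: 6 });
+const back = gunzip(gz);
+```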
+
+___
+
+### inflate
+
+▸ **inflate**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
+
+*Defined in [index.ts:691](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L691)*
+
+Expands DEFLATE data with no wrapper
+
+#### Parameters:
+
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to decompress |
+`out?` | Uint8Array | Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length. |
+
+**Returns:** Uint8Array
+
+___
+
+### unzlib
+
+▸ **unzlib**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
+
+*Defined in [index.ts:759](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L759)*
+
+Expands Zlib data
+
+#### Parameters:
+
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to decompress |
+`out?` | Uint8Array | Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length. |
+
+**Returns:** Uint8Array
+
+___
+
+### zlib
+
+▸ **zlib**(`data`: Uint8Array, `opts`: [ZlibOptions](interfaces/zliboptions.md)): Uint8Array
+
+*Defined in [index.ts:738](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L738)*
+
+Compresses data with Zlib
+
+#### Parameters:
+
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to compress |
+`opts` | [ZlibOptions](interfaces/zliboptions.md) | The compression options |
+
+**Returns:** Uint8Array
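+
+For example (illustrative only; `unzlib`, documented above, reverses the operation):
+
+```js
+import { zlib, unzlib } from 'fflate';
+
+const data = new TextEncoder().encode('zlib-wrapped payload');
+// Produces Zlib-wrapped DEFLATE data
+const wrapped = zlib(data, { level: 9 });
+const unwrapped = unzlib(wrapped);
+```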

+ 55 - 0
docs/interfaces/deflateoptions.md

@@ -0,0 +1,55 @@
+# Interface: DeflateOptions
+
+Options for compressing data into a DEFLATE format
+
+## Hierarchy
+
+* **DeflateOptions**
+
+  ↳ [GZIPOptions](gzipoptions.md)
+
+  ↳ [ZlibOptions](zliboptions.md)
+
+## Index
+
+### Properties
+
+* [level](deflateoptions.md#level)
+* [mem](deflateoptions.md#mem)
+
+## Properties
+
+### level
+
+• `Optional` **level**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9
+
+*Defined in [index.ts:633](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L633)*
+
+The level of compression to use, ranging from 0-9.
+
+0 will store the data without compression.
+1 is fastest but compresses the worst, 9 is slowest but compresses the best.
+The default level is 6.
+
+Typically, binary data benefits much more from higher values than text data.
+In both cases, the extra time spent at higher values is usually disproportionate to the resulting reduction in final size.
+
+For example, a 1 MB text file could:
+- become 1.01 MB with level 0 in 1ms
+- become 400 kB with level 1 in 10ms
+- become 320 kB with level 9 in 100ms
+
+___
+
+### mem
+
+• `Optional` **mem**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9 \| 10 \| 11 \| 12
+
+*Defined in [index.ts:642](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L642)*
+
+The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
+
+Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
+It is recommended not to lower the value below 4, since that tends to hurt performance.
+
+The default value is automatically determined based on the size of the input data.
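+
+An illustrative sketch of the speed/size/memory tradeoff (the input is arbitrary; the memory figures follow from the values listed above, roughly 4 kB × 2^`mem`):
+
+```js
+import { deflate } from 'fflate';
+
+const data = new Uint8Array(1000000); // 1 MB of zeroes, purely for illustration
+// Best compression, slowest, ~16 MB of working memory (mem 12)
+const smallest = deflate(data, { level: 9, mem: 12 });
+// Fastest, worst compression, ~64 kB of working memory (mem 4)
+const fastest = deflate(data, { level: 1, mem: 4 });
+```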

+ 81 - 0
docs/interfaces/gzipoptions.md

@@ -0,0 +1,81 @@
+# Interface: GZIPOptions
+
+Options for compressing data into a GZIP format
+
+## Hierarchy
+
+* [DeflateOptions](deflateoptions.md)
+
+  ↳ **GZIPOptions**
+
+## Index
+
+### Properties
+
+* [filename](gzipoptions.md#filename)
+* [level](gzipoptions.md#level)
+* [mem](gzipoptions.md#mem)
+* [mtime](gzipoptions.md#mtime)
+
+## Properties
+
+### filename
+
+• `Optional` **filename**: string
+
+*Defined in [index.ts:658](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L658)*
+
+The filename of the data. If the `gunzip` command is used to decompress the data, it will output a file
+with this name instead of the name of the compressed file.
+
+___
+
+### level
+
+• `Optional` **level**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9
+
+*Inherited from [DeflateOptions](deflateoptions.md).[level](deflateoptions.md#level)*
+
+*Defined in [index.ts:633](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L633)*
+
+The level of compression to use, ranging from 0-9.
+
+0 will store the data without compression.
+1 is fastest but compresses the worst, 9 is slowest but compresses the best.
+The default level is 6.
+
+Typically, binary data benefits much more from higher values than text data.
+In both cases, the extra time spent at higher values is usually disproportionate to the resulting reduction in final size.
+
+For example, a 1 MB text file could:
+- become 1.01 MB with level 0 in 1ms
+- become 400 kB with level 1 in 10ms
+- become 320 kB with level 9 in 100ms
+
+___
+
+### mem
+
+• `Optional` **mem**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9 \| 10 \| 11 \| 12
+
+*Inherited from [DeflateOptions](deflateoptions.md).[mem](deflateoptions.md#mem)*
+
+*Defined in [index.ts:642](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L642)*
+
+The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
+
+Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
+It is recommended not to lower the value below 4, since that tends to hurt performance.
+
+The default value is automatically determined based on the size of the input data.
+
+___
+
+### mtime
+
+• `Optional` **mtime**: Date \| string \| number
+
+*Defined in [index.ts:653](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L653)*
+
+When the file was last modified. Defaults to the current time.
+Set this to 0 to avoid specifying a modification date entirely.
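+
+An illustrative sketch of setting the GZIP header metadata (the filename and date here are arbitrary):
+
+```js
+import { gzip } from 'fflate';
+
+const data = new TextEncoder().encode('report contents');
+const gz = gzip(data, {
+  // The gunzip command can restore this name when decompressing
+  filename: 'report.txt',
+  // A Date, date string, or timestamp; 0 omits the modification date entirely
+  mtime: new Date('2020-09-20T00:00:00Z')
+});
+```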

+ 57 - 0
docs/interfaces/zliboptions.md

@@ -0,0 +1,57 @@
+# Interface: ZlibOptions
+
+Options for compressing data into a Zlib format
+
+## Hierarchy
+
+* [DeflateOptions](deflateoptions.md)
+
+  ↳ **ZlibOptions**
+
+## Index
+
+### Properties
+
+* [level](zliboptions.md#level)
+* [mem](zliboptions.md#mem)
+
+## Properties
+
+### level
+
+• `Optional` **level**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9
+
+*Inherited from [DeflateOptions](deflateoptions.md).[level](deflateoptions.md#level)*
+
+*Defined in [index.ts:633](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L633)*
+
+The level of compression to use, ranging from 0-9.
+
+0 will store the data without compression.
+1 is fastest but compresses the worst, 9 is slowest but compresses the best.
+The default level is 6.
+
+Typically, binary data benefits much more from higher values than text data.
+In both cases, the extra time spent at higher values is usually disproportionate to the resulting reduction in final size.
+
+For example, a 1 MB text file could:
+- become 1.01 MB with level 0 in 1ms
+- become 400 kB with level 1 in 10ms
+- become 320 kB with level 9 in 100ms
+
+___
+
+### mem
+
+• `Optional` **mem**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9 \| 10 \| 11 \| 12
+
+*Inherited from [DeflateOptions](deflateoptions.md).[mem](deflateoptions.md#mem)*
+
+*Defined in [index.ts:642](https://github.com/101arrowz/fflate/blob/3362e39/src/index.ts#L642)*
+
+The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
+
+Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
+It is recommended not to lower the value below 4, since that tends to hurt performance.
+
+The default value is automatically determined based on the size of the input data.

+ 13 - 3
package.json

@@ -1,20 +1,30 @@
 {
   "name": "fflate",
   "version": "0.0.1",
-  "description": "Native performance (de)compression in a 3kB package",
+  "description": "High performance (de)compression in an 8kB package",
   "main": "lib/index.js",
   "module": "esm/index.js",
   "types": "lib/index.d.ts",
   "repository": "https://github.com/101arrowz/fflate",
   "author": "Arjun Barrett",
   "license": "MIT",
+  "keywords": [
+    "gzip",
+    "gunzip",
+    "deflate",
+    "inflate",
+    "compression",
+    "decompression",
+    "zlib"
+  ],
   "scripts": {
-    "build": "tsc && tsc --project tsconfig.esm.json",
-    "test": "node test.js",
+    "build": "tsc && tsc --project tsconfig.esm.json && typedoc --mode library --plugin typedoc-plugin-markdown --hideProjectName --hideBreadcrumbs --readme none",
     "prepublish": "yarn build"
   },
   "devDependencies": {
     "pako": "^1.0.11",
+    "typedoc": "^0.17.0-3",
+    "typedoc-plugin-markdown": "^3.0.2",
     "typescript": "^4.0.2",
     "uzip": "^0.20200919.0"
   }

+ 0 - 606
src/flate.ts

@@ -1,606 +0,0 @@
-// DEFLATE is a complex format; to read this code, you should probably check the RFC first:
-// https://tools.ietf.org/html/rfc1951
-
-// Much of the following code is similar to that of UZIP.js:
-// https://github.com/photopea/UZIP.js
-// Many optimizations have been made, so the bundle size is ultimately smaller but performance is similar.
-
-// Sometimes 0 will appear where -1 would be more appropriate. This is because using a uint
-// is better for memory in most engines (I *think*).
-
-// aliases for shorter compressed code (most minifers don't do this)
-const u8 = Uint8Array, u16 = Uint16Array, u32 = Uint32Array;
-
-// fixed length extra bits
-const fleb = new u8([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0, /* unused */ 0, 0, /* impossible */ 0]);
-
-// fixed distance extra bits
-// see fleb note
-const fdeb = new u8([0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, /* unused */ 0, 0]);
-
-// code length index map
-const clim = new u8([16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15]);
-
-// get base, reverse index map from extra bits
-const freb = (eb: Uint8Array, start: number) => {
-  const b = new u16(31);
-  for (let i = 0; i < 31; ++i) {
-    b[i] = start += 1 << eb[i - 1];
-  }
-  // numbers here are at max 18 bits
-  const r = new u32(b[30]);
-  for (let i = 1; i < 30; ++i) {
-    for (let j = b[i]; j < b[i + 1]; ++j) {
-      r[j] = ((j - b[i]) << 5) | i;
-    }
-  }
-  return [b, r] as const;
-}
-
-const [fl, revfl] = freb(fleb, 2);
-// we can ignore the fact that the other numbers are wrong; they never happen anyway
-fl[28] = 258;
-revfl[258] = 28;
-const [fd, revfd] = freb(fdeb, 0);
-
-// map of value to reverse (assuming 16 bits)
-const rev = new u16(32768);
-for (let i = 0; i < 32768; ++i) {
-  // reverse table algorithm from UZIP.js
-  let x = i;
-  x = ((x & 0xaaaaaaaa) >>> 1) | ((x & 0x55555555) << 1);
-  x = ((x & 0xcccccccc) >>> 2) | ((x & 0x33333333) << 2);
-  x = ((x & 0xf0f0f0f0) >>> 4) | ((x & 0x0f0f0f0f) << 4);
-  x = ((x & 0xff00ff00) >>> 8) | ((x & 0x00ff00ff) << 8);
-  rev[i] = ((x >>> 16) | (x << 16)) >>> 17;
-}
-
-// create huffman tree from u8 "map": index -> code length for code index
-// mb (max bits) must be at most 15
-// TODO: optimize/split up?
-const hMap = ((cd: Uint8Array, mb: number, r: 0 | 1) => {
-  const s = cd.length;
-  // index
-  let i = 0;
-  // u8 "map": index -> # of codes with bit length = index
-  const l = new u8(mb);
-  // length of cd must be 288 (total # of codes)
-  for (; i < s; ++i) ++l[cd[i] - 1];
-  // u16 "map": index -> minimum code for bit length = index
-  const le = new u16(mb);
-  for (i = 0; i < mb; ++i) {
-    le[i] = (le[i - 1] + l[i - 1]) << 1;
-  }
-  let co: Uint16Array;
-  if (r) {
-    co = new u16(s);
-    for (i = 0; i < s; ++i) co[i] = rev[le[cd[i] - 1]++] >>> (15 - cd[i]);
-  } else {
-    // u16 "map": index -> number of actual bits, symbol for code
-    co = new u16(1 << mb);
-    // bits to remove for reverser
-    const rvb = 15 - mb;
-    for (i = 0; i < s; ++i) {
-      // ignore 0 lengths
-      if (cd[i]) {
-        // num encoding both symbol and bits read
-        const sv = (i << 4) | cd[i];
-        // free bits
-        const r = mb - cd[i];
-        // start value
-        let v = le[cd[i] - 1]++ << r;
-        // m is end value
-        for (const m = v | ((1 << r) - 1); v <= m; ++v) {
-          // every 16 bit value starting with the code yields the same result
-          co[rev[v] >>> rvb] = sv;
-        }
-      }
-    }
-  }
-  return co;
-});
-
-// fixed length tree
-const flt = new u8(286);
-for (let i = 0; i < 144; ++i) flt[i] = 8;
-for (let i = 144; i < 256; ++i) flt[i] = 9;
-for (let i = 256; i < 280; ++i) flt[i] = 7;
-for (let i = 280; i < 286; ++i) flt[i] = 8;
-// fixed distance tree
-const fdt = new u8(30);
-for (let i = 0; i < 30; ++i) fdt[i] = 5;
-// fixed length map
-const flm = hMap(flt, 9, 0), flnm = hMap(flt, 9, 1);
-// fixed distance map
-const fdm = hMap(fdt, 5, 0), fdnm = hMap(fdt, 5, 1);
-
-// find max of array
-const max = (a: Uint8Array | number[]) => {
-  let m = a[0];
-  for (let i = 0; i < a.length; ++i) {
-    if (a[i] > m) m = a[i];
-  }
-  return m;
-};
-
-// read d, starting at bit p continuing for l bits
-const bits = (d: Uint8Array, p: number, l: number) => {
-  const o = p >>> 3;
-  return ((d[o] | (d[o + 1] << 8)) >>> (p & 7)) & ((1 << l) - 1);
-}
-
-// read d, starting at bit p continuing for at least 16 bits
-const bits16 = (d: Uint8Array, p: number) => {
-  const o = p >>> 3;
-  return ((d[o] | (d[o + 1] << 8) | (d[o + 2] << 16) | (d[o + 3] << 24)) >>> (p & 7));
-}
-
-
-// expands raw DEFLATE data
-const inflate = (dat: Uint8Array, outSize?: number) => {
-  let buf = outSize && new u8(outSize);
-  // have to estimate size
-  const noBuf = !buf;
-  // Slightly less than 2x - assumes ~60% compression ratio
-  if (noBuf) buf = new u8((dat.length >>> 2) << 3);
-  // ensure buffer can fit at least l elements
-  const cbuf = (l: number) => {
-    let bl = buf.length;
-    // need to increase size to fit
-    if (l > bl) {
-      // Double or set to necessary, whichever is greater
-      const nbuf = new u8(Math.max(bl << 1, l));
-      nbuf.set(buf);
-      buf = nbuf;
-    }
-  }
-  //  last chunk     chunktype literal   dist       lengths    lmask   dmask
-  let final = 0, type = 0, hLit = 0, hDist = 0, hcLen = 0, ml = 0, md = 0;
-  //  bitpos   bytes
-  let pos = 0, bt = 0;
-  //  len                dist
-  let lm: Uint16Array, dm: Uint16Array;
-  while (!final) {
-    // BFINAL - this is only 1 when last chunk is next
-    final = bits(dat, pos, 1);
-    // type: 0 = no compression, 1 = fixed huffman, 2 = dynamic huffman
-    type = bits(dat, pos + 1, 2);
-    pos += 3;
-    if (!type) {
-      // go to end of byte boundary
-      if (pos & 7) pos += 8 - (pos & 7);
-      const s = (pos >>> 3) + 4, l = dat[s - 4] | (dat[s - 3] << 8);
-      // ensure size
-      if (noBuf) cbuf(bt + l);
-      // Copy over uncompressed data
-      buf.set(dat.subarray(s, s + l), bt);
-      // Get new bitpos, update byte count
-      pos = (s + l) << 3, bt += l;
-      continue;
-    }
-    // Make sure the buffer can hold this + the largest possible addition
-    // maximum chunk size (practically, theoretically infinite) is 2^17;
-    if (noBuf) cbuf(bt + 131072);
-    if (type == 1) {
-      lm = flm;
-      dm = fdm;
-      ml = 511;
-      md = 31;
-    }
-    else if (type == 2) {
-      hLit = bits(dat, pos, 5) + 257;
-      hDist = bits(dat, pos + 5, 5) + 1;
-      hcLen = bits(dat, pos + 10, 4) + 4;
-      pos += 14;
-      // length+distance tree
-      const ldt = new u8(hLit + hDist);
-      // code length tree
-      const clt = new u8(19);
-      for (let i = 0; i < hcLen; ++i) {
-        // use index map to get real code
-        clt[clim[i]] = bits(dat, pos + i * 3, 3);
-      }
-      pos += hcLen * 3;
-      // code lengths bits
-      const clb = max(clt);
-      // code lengths map
-      const clm = hMap(clt, clb, 0);
-      for (let i = 0; i < ldt.length;) {
-        const r = clm[bits(dat, pos, clb)];
-        // bits read
-        pos += r & 15;
-        // symbol
-        const s = r >>> 4;
-        // code length to copy
-        if (s < 16) {
-          ldt[i++] = s;
-        } else {
-          //  copy   count
-          let c = 0, n = 0;
-          if (s == 16) n = 3 + bits(dat, pos, 2), pos += 2, c = ldt[i - 1];
-          else if (s == 17) n = 3 + bits(dat, pos, 3), pos += 3;
-          else if (s == 18) n = 11 + bits(dat, pos, 7), pos += 7;
-          while (n--) ldt[i++] = c;
-        }
-      }
-      //    length tree                 distance tree
-      const lt = ldt.subarray(0, hLit), dt = ldt.subarray(hLit);
-      // max length bits
-      const mlb = max(lt)
-      // max dist bits
-      const mdb = max(dt);
-      ml = (1 << mlb) - 1;
-      lm = hMap(lt, mlb, 0);
-      md = (1 << mdb) - 1;
-      dm = hMap(dt, mdb, 0);
-    }
-    for (;;) {
-      // bits read, code
-      const c = lm[bits16(dat, pos) & ml];
-      pos += c & 15;
-      // code
-      const sym = c >>> 4;
-      if (sym < 256) buf[bt++] = sym;
-      else if (sym == 256) break;
-      else {
-        let end = bt + sym - 254;
-        // no extra bits needed if less
-        if (sym > 264) {
-          // index
-          const i = sym - 257;
-          end = bt + bits(dat, pos, fleb[i]) + fl[i];
-          pos += fleb[i];
-        }
-        // dist
-        const d = dm[bits16(dat, pos) & md];
-        pos += d & 15;
-        const dsym = d >>> 4;
-        let dt = fd[dsym];
-        if (dsym > 3) {
-          dt += bits16(dat, pos) & ((1 << fdeb[dsym]) - 1);
-          pos += fdeb[dsym];
-        }
-        if (noBuf) cbuf(bt + 131072);
-        while (bt < end) {
-          buf[bt] = buf[bt++ - dt];
-          buf[bt] = buf[bt++ - dt];
-          buf[bt] = buf[bt++ - dt];
-          buf[bt] = buf[bt++ - dt];
-        }
-        bt = end;
-      }
-    }
-  }
-  return buf.slice(0, bt);
-}
-
-// starting at p, write the minimum number of bits that can hold v to ds
-const wbits = (d: Uint8Array, p: number, v: number) => {
-  v <<= p & 7;
-  const o = p >>> 3;
-  d[o] |= v;
-  d[o + 1] |= v >>> 8;
-}
-
-// starting at p, write the minimum number of bits (>8) that can hold v to ds
-const wbits16 = (d: Uint8Array, p: number, v: number) => {
-  v <<= p & 7;
-  const o = p >>> 3;
-  d[o] |= v;
-  d[o + 1] |= v >>> 8;
-  d[o + 2] |= v >>> 16;
-}
-
-type HuffNode = {
-  // symbol
-  s: number;
-  // frequency
-  f: number;
-  // left child
-  l?: HuffNode;
-  // right child
-  r?: HuffNode;
-};
-
-// creates code lengths from a frequency table
-const hTree = (d: Uint16Array, mb: number) => {
-  // Need extra info to make a tree
-  const t: HuffNode[] = [];
-  for (let i = 0; i < d.length; ++i) {
-    if (d[i]) t.push({ s: i, f: d[i] });
-  }
-  const s = t.length;
-  const t2 = t.slice();
-  if (s == 0) return [new u8(0), 0] as const;
-  if (s == 1) {
-    const v = new u8(t[0].s + 1);
-    v[t[0].s] = 1;
-    return [v, 1] as const;
-  }
-  t.sort((a, b) => a.f - b.f);
-  // after i2 reaches last ind, will be stopped
-  // freq must be greater than largest possible number of symbols
-  t.push({ s: -1, f: 25001 });
-  let l = t[0], r = t[1], i0 = 0, i1 = 1, i2 = 2;
-  t[0] = { s: -1, f: l.f + r.f, l, r };
-  // efficient algorithm from UZIP.js
-  // i0 is lookbehind, i2 is lookahead - after processing two low-freq
-  // symbols that combined have high freq, will start processing i2 (high-freq,
-  // non-composite) symbols instead
-  // see https://reddit.com/r/photopea/comments/ikekht/uzipjs_questions/
-	while (i1 != s - 1) {
-    l = t[t[i0].f < t[i2].f ? i0++ : i2++];
-    r = t[i0 != i1 && t[i0].f < t[i2].f ? i0++ : i2++];
-    t[i1++] = { s: -1, f: l.f + r.f, l, r };
-  }
-  let maxSym = t2[0].s;
-  for (let i = 1; i < s; ++i) {
-    if (t2[i].s > maxSym) maxSym = t2[i].s;
-  }
-  // code lengths
-  const tr = new u16(maxSym + 1);
-  // max bits in tree
-  let mbt = ln(t[i1 - 1], tr, 0);
-  if (mbt > mb) {
-    // more algorithms from UZIP.js
-    // TODO: find out how this code works (debt)
-    //  ind    debt
-    let i = 0, dt = 0;
-    //    left            cost
-    const lft = mbt - mb, cst = 1 << lft;
-    t2.sort((a, b) => tr[b.s] - tr[a.s] || a.f - b.f);
-    for (; i < s; ++i) {
-      const i2 = t2[i].s;
-      if (tr[i2] > mb) {
-        dt += cst - (1 << (mbt - tr[i2]));
-        tr[i2] = mb;
-      } else break;
-    }
-    dt >>>= lft;
-    while (dt > 0) {
-      const i2 = t2[i].s;
-      if (tr[i2] < mb) dt -= 1 << (mb - tr[i2]++ - 1);
-      else ++i;
-    }
-    for (; i >= 0 && dt; --i) {
-      const i2 = t2[i].s;
-      if (tr[i2] == mb) {
-        --tr[i2];
-        ++dt;
-      }
-    }
-    mbt = mb;
-  }
-  return [new u8(tr), mbt] as const;
-}
-// get the max length and assign length codes
-const ln = (n: HuffNode, l: Uint16Array, d: number): number => {
-  return n.s == -1
-    ? Math.max(ln(n.l, l, d + 1), ln(n.r, l, d + 1))
-    : (l[n.s] = d);
-}
-
-// length codes generation
-const lc = (c: Uint8Array) => {
-  let s = c.length;
-  // Note that the semicolon was intentional
-  while (s && !c[--s]);
-  const cl = new u16(++s);
-  //  ind      num         streak
-  let cli = 0, cln = c[0], cls = 1;
-  const w = (v: number) => { cl[cli++] = v; }
-  for (let i = 1; i <= s; ++i) {
-    if (c[i] == cln && i != s)
-      ++cls;
-    else {
-      if (!cln && cls > 2) {
-        for (; cls > 138; cls -= 138) w(32754);
-        if (cls > 2) {
-          w(cls > 10 ? ((cls - 11) << 5) | 28690 : ((cls - 3) << 5) | 12305);
-          cls = 0;
-        }
-      } else if (cls > 3) {
-        w(cln), --cls;
-        for (; cls > 6; cls -= 6) w(8304);
-        if (cls > 2) w(((cls - 3) << 5) | 8208), cls = 0;
-      }
-      while (cls--) w(cln);
-      cls = 1;
-      cln = c[i];
-    }
-  }
-  return [cl.slice(0, cli), s] as const;
-}
-
-// calculate the length of output from tree, code lengths
-const clen = (cf: Uint16Array, cl: Uint8Array) => {
-  let l = 0;
-  for (let i = 0; i < cl.length; ++i) l += cf[i] * cl[i];
-  return l;
-}
-
-// writes a fixed block
-// returns the new bit pos
-const wfblk = (out: Uint8Array, pos: number, dat: Uint8Array) => {
-  // no need to write 00 as type: TypedArray defaults to 0
-  const s = dat.length;
-  const o = (pos + 2) >>> 3;
-  out[o + 1] = s & 255;
-  out[o + 2] = s >>> 8;
-  out[o + 3] = out[o + 1] ^ 255;
-  out[o + 4] = out[o + 2] ^ 255;
-  out.set(dat, o + 5);
-  return (o + 4 + s) << 3;
-}
-
-// writes a block
-const wblk = (dat: Uint8Array, out: Uint8Array, final: number, syms: Uint32Array, lf: Uint16Array, df: Uint16Array, eb: number, li: number, bs: number, bl: number, p: number) => {
-  wbits(out, p++, final);
-  ++lf[256];
-  const [dlt, mlb] = hTree(lf, 15);
-  const [ddt, mdb] = hTree(df, 15);
-  const [lclt, nlc] = lc(dlt);
-  const [lcdt, ndc] = lc(ddt);
-  const lcfreq = new u16(19);
-  for (let i = 0; i < lclt.length; ++i) lcfreq[lclt[i] & 31]++;
-  for (let i = 0; i < lcdt.length; ++i) lcfreq[lcdt[i] & 31]++;
-  const [lct, mlcb] = hTree(lcfreq, 7);
-  let nlcc = 19;
-  for (; nlcc > 4 && !lct[clim[nlcc - 1]]; --nlcc);
-  const flen = (bl + 5) << 3;
-  const ftlen = clen(lf, flt) + clen(df, fdt) + eb;
-  const dtlen = clen(lf, dlt) + clen(df, ddt) + eb + 14 + 3 * nlcc + clen(lcfreq, lct) + (2 * lcfreq[16] + 3 * lcfreq[17] + 7 * lcfreq[18]);
-  if (flen < ftlen && flen < dtlen) return wfblk(out, p, dat.subarray(bs, bs + bl));
-  let lm: Uint16Array, ll: Uint8Array, dm: Uint16Array, dl: Uint8Array;
-  wbits(out, p, 1 + (dtlen < ftlen as unknown as number)), p += 2;
-  if (dtlen < ftlen) {
-    lm = hMap(dlt, mlb, 1), ll = dlt, dm = hMap(ddt, mdb, 1), dl = ddt;
-    const llm = hMap(lct, mlcb, 1);
-    wbits(out, p, nlc - 257);
-    wbits(out, p + 5, ndc - 1);
-    wbits(out, p + 10, nlcc - 4);
-    p += 14;
-    for (let i = 0; i < nlcc; ++i) wbits(out, p + 3 * i, lct[clim[i]]);
-    p += 3 * nlcc;
-    const lcts = [lclt, lcdt];
-    for (let it = 0; it < 2; ++it) {
-      const clct = lcts[it];
-      for (let i = 0; i < clct.length; ++i) {
-        const len = clct[i] & 31;
-        wbits(out, p, llm[len]), p += lct[len];
-        if (len > 15) wbits(out, p, (clct[i] >>> 5) & 127), p += clct[i] >>> 12;
-      }
-    }
-  } else {
-    lm = flnm, ll = flt, dm = fdnm, dl = fdt;
-  }
-  for (let i = 0; i < li; ++i) {
-    if (syms[i] > 255) {
-      const len = (syms[i] >>> 18) & 31;
-      wbits16(out, p, lm[len + 257]), p += ll[len + 257];
-      if (len > 7) wbits(out, p, (syms[i] >>> 23) & 31), p += fleb[len];
-      const dst = syms[i] & 31;
-      wbits16(out, p, dm[dst]), p += dl[dst];
-      if (dst > 3) wbits16(out, p, (syms[i] >>> 5) & 8191), p += fdeb[dst];
-    } else {
-      wbits16(out, p, lm[syms[i]]), p += ll[syms[i]];
-    }
-  }
-  wbits16(out, p, lm[256]);
-  return p + ll[256];
-}
-
-// deflate options (nice << 13) | chain
-const deo = new u32([65540, 131080, 131088, 131104, 262176, 1048704, 1048832, 2114560, 2117632]);
-
-// compresses data into a raw DEFLATE buffer
-const deflate = (dat: Uint8Array, lvl: number, plvl: number, pre: number, post: number) => {
-  const s = dat.length;
-  const o = new u8(pre + s + 5 * Math.ceil(s / 17000) + post);
-  // writing to this writes to the output buffer
-  const w = o.subarray(pre, o.length - post);
-  if (!lvl || dat.length < 4) {
-    for (let i = 0, pos = 0; i < s; i += 65535) {
-      // end
-      const e = i + 65535;
-      if (e < s) {
-        // write full block
-        pos = wfblk(w, pos, dat.subarray(i, e));
-      } else {
-        // write final block
-        w[i] = 1;
-        wfblk(w, pos, dat.subarray(i, s));
-      }
-    }
-    return o;
-  }
-  const opt = deo[lvl - 1];
-  const n = opt >>> 13, c = opt & 8191;
-  const msk = (1 << plvl) - 1;
-  //    prev 2-byte val map    curr 2-byte val map
-  const prev = new u16(32768), head = new u16(msk + 1);
-  const hsh = (i: number) => (dat[i] | (dat[i + 1] << 8) | (dat[i + 2] << 16)) & msk;
-  // 24576 is an arbitrary number of maximum symbols per block
-  // 423 buffer for last block
-  const syms = new u32(25000);
-  // length/literal freq   distance freq
-  const lf = new u16(286), df = new u16(30);
-  //  l/lcnt  exbits  index  l/lind  waitdx  bitpos
-  let eb = 0, i = 0, li = 0, wi = 0, bs = 0, pos = 0;
-  for (; i < s; ++i) {
-    // hash value
-    const hv = hsh(i);
-    // index mod 32768
-    let imod = i & 32767;
-    // previous index with this value
-    let pimod = head[hv];
-    prev[imod] = pimod;
-    head[hv] = imod;
-    // We always should modify head and prev, but only add symbols if
-    // this data is not yet processed ("wait" for wait index)
-    if (wi <= i) {
-      // bytes remaining
-      const rem = s - i;
-      if (li > 24576 && rem > 423) {
-        pos = wblk(dat, w, 0, syms, lf, df, eb, li, bs, i - bs, pos);
-        li = eb = 0, bs = i;
-        for (let j = 0; j < 286; ++j) lf[j] = 0;
-        for (let j = 0; j < 30; ++j) df[j] = 0;
-      }
-      //  len    dist   chain
-      let l = 0, d = 0, ch = c, dif = (i - pimod) & 32767;
-      if (rem > 2 && hv == hsh(i - dif)) {
-        const maxn = Math.min(n, rem);
-        const maxd = Math.min(32767, i);
-        // max possible max length
-        // not capped at dif because decompressors implement "rolling" index population
-        const ml = Math.min(258, rem);
-        while (dif <= maxd && --ch && imod != pimod) {
-          if (dat[i + l] == dat[i + l - dif]) {
-            let nl = 0;
-            for (; nl < ml && dat[i + nl] == dat[i + nl - dif]; ++nl);
-            if (nl > l) {
-              l = nl;
-              d = dif;
-              // break out early when we reach "nice" (we are satisfied enough)
-              if (nl >= maxn) break;
-              // now, find the rarest 2-byte sequence within this
-              // length of literals and search for that instead.
-              // Much faster than just using the start
-              const mmd = Math.min(dif, nl - 2);
-              let md = 0;
-              for (let j = 0; j < mmd; ++j) {
-                const ti = (i - dif + j + 32768) & 32767;
-                const pti = prev[ti];
-                const cd = (ti - pti + 32768) & 32767;
-                if (cd > md) md = cd, pimod = ti;
-              }
-            }
-          }
-          // check the previous match
-          imod = pimod, pimod = prev[imod];
-          dif += (imod - pimod + 32768) & 32767;
-        }
-      }
-      // l will be nonzero only when a match was found
-      if (l) {
-        // store both dist and len data in one Uint32
-        // Make sure this is recognized as a len/dist with 28th bit (2^28)
-        syms[li++] = 268435456 | (revfl[l] << 18) | revfd[d];
-        const lin = revfl[l] & 31, din = revfd[d] & 31;
-        eb += fleb[lin] + fdeb[din];
-        ++lf[257 + lin];
-        ++df[din];
-        wi = i + l;
-      } else {
-        syms[li++] = dat[i];
-        ++lf[dat[i]];
-      }
-    }
-  }
-  if (bs != i) pos = wblk(dat, w, 1, syms, lf, df, eb, li, bs, i - bs, pos);
-  return o.slice(0, (pos >>> 3) + 1 + post);
-}
-
-
-export { inflate, deflate };

+ 779 - 1
src/index.ts

@@ -1 +1,779 @@
-export * from './flate';
+// DEFLATE is a complex format; to read this code, you should probably check the RFC first:
+// https://tools.ietf.org/html/rfc1951
+
+// Much of the following code is similar to that of UZIP.js:
+// https://github.com/photopea/UZIP.js
+// Many optimizations have been made, so the bundle size is ultimately smaller but performance is similar.
+
+// Sometimes 0 will appear where -1 would be more appropriate. This is because using a uint
+// is better for memory in most engines (I *think*).
+
+// aliases for shorter compressed code (most minifers don't do this)
+const u8 = Uint8Array, u16 = Uint16Array, u32 = Uint32Array;
+
+// fixed length extra bits
+const fleb = new u8([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0, /* unused */ 0, 0, /* impossible */ 0]);
+
+// fixed distance extra bits
+// see fleb note
+const fdeb = new u8([0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, /* unused */ 0, 0]);
+
+// code length index map
+const clim = new u8([16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15]);
+
+// get base, reverse index map from extra bits
+const freb = (eb: Uint8Array, start: number) => {
+  const b = new u16(31);
+  for (let i = 0; i < 31; ++i) {
+    b[i] = start += 1 << eb[i - 1];
+  }
+  // numbers here are at max 18 bits
+  const r = new u32(b[30]);
+  for (let i = 1; i < 30; ++i) {
+    for (let j = b[i]; j < b[i + 1]; ++j) {
+      r[j] = ((j - b[i]) << 5) | i;
+    }
+  }
+  return [b, r] as const;
+}
+
+const [fl, revfl] = freb(fleb, 2);
+// we can ignore the fact that the other numbers are wrong; they never happen anyway
+fl[28] = 258;
+revfl[258] = 28;
+const [fd, revfd] = freb(fdeb, 0);
+
+// map of value to reverse (assuming 16 bits)
+const rev = new u16(32768);
+for (let i = 0; i < 32768; ++i) {
+  // reverse table algorithm from UZIP.js
+  let x = i;
+  x = ((x & 0xAAAAAAAA) >>> 1) | ((x & 0x55555555) << 1);
+  x = ((x & 0xCCCCCCCC) >>> 2) | ((x & 0x33333333) << 2);
+  x = ((x & 0xF0F0F0F0) >>> 4) | ((x & 0x0F0F0F0F) << 4);
+  rev[i] = (((x & 0xFF00FF00) >>> 8) | ((x & 0x00FF00FF) << 8)) >>> 1;
+}
+
+// create huffman tree from u8 "map": index -> code length for code index
+// mb (max bits) must be at most 15
+// TODO: optimize/split up?
+const hMap = ((cd: Uint8Array, mb: number, r: 0 | 1) => {
+  const s = cd.length;
+  // index
+  let i = 0;
+  // u8 "map": index -> # of codes with bit length = index
+  const l = new u8(mb);
+  // length of cd must be 288 (total # of codes)
+  for (; i < s; ++i) ++l[cd[i] - 1];
+  // u16 "map": index -> minimum code for bit length = index
+  const le = new u16(mb);
+  for (i = 0; i < mb; ++i) {
+    le[i] = (le[i - 1] + l[i - 1]) << 1;
+  }
+  let co: Uint16Array;
+  if (r) {
+    co = new u16(s);
+    for (i = 0; i < s; ++i) co[i] = rev[le[cd[i] - 1]++] >>> (15 - cd[i]);
+  } else {
+    // u16 "map": index -> number of actual bits, symbol for code
+    co = new u16(1 << mb);
+    // bits to remove for reverser
+    const rvb = 15 - mb;
+    for (i = 0; i < s; ++i) {
+      // ignore 0 lengths
+      if (cd[i]) {
+        // num encoding both symbol and bits read
+        const sv = (i << 4) | cd[i];
+        // free bits
+        const r = mb - cd[i];
+        // start value
+        let v = le[cd[i] - 1]++ << r;
+        // m is end value
+        for (const m = v | ((1 << r) - 1); v <= m; ++v) {
+          // every 16 bit value starting with the code yields the same result
+          co[rev[v] >>> rvb] = sv;
+        }
+      }
+    }
+  }
+  return co;
+});
+
+// fixed length tree
+const flt = new u8(286);
+for (let i = 0; i < 144; ++i) flt[i] = 8;
+for (let i = 144; i < 256; ++i) flt[i] = 9;
+for (let i = 256; i < 280; ++i) flt[i] = 7;
+for (let i = 280; i < 286; ++i) flt[i] = 8;
+// fixed distance tree
+const fdt = new u8(30);
+for (let i = 0; i < 30; ++i) fdt[i] = 5;
+// fixed length map
+const flm = hMap(flt, 9, 0), flnm = hMap(flt, 9, 1);
+// fixed distance map
+const fdm = hMap(fdt, 5, 0), fdnm = hMap(fdt, 5, 1);
+
+// find max of array
+const max = (a: Uint8Array | number[]) => {
+  let m = a[0];
+  for (let i = 0; i < a.length; ++i) {
+    if (a[i] > m) m = a[i];
+  }
+  return m;
+};
+
+// read d, starting at bit p continuing for l bits
+const bits = (d: Uint8Array, p: number, l: number) => {
+  const o = p >>> 3;
+  return ((d[o] | (d[o + 1] << 8)) >>> (p & 7)) & ((1 << l) - 1);
+}
+
+// read d, starting at bit p continuing for at least 16 bits
+const bits16 = (d: Uint8Array, p: number) => {
+  const o = p >>> 3;
+  return ((d[o] | (d[o + 1] << 8) | (d[o + 2] << 16) | (d[o + 3] << 24)) >>> (p & 7));
+}
+
+
+// expands raw DEFLATE data
+const inflt = (dat: Uint8Array, buf?: Uint8Array) => {
+  // have to estimate size
+  const noBuf = !buf;
+  // Slightly less than 2x - assumes ~60% compression ratio
+  if (noBuf) buf = new u8((dat.length >>> 2) << 3);
+  // ensure buffer can fit at least l elements
+  const cbuf = (l: number) => {
+    let bl = buf.length;
+    // need to increase size to fit
+    if (l > bl) {
+      // Double or set to necessary, whichever is greater
+      const nbuf = new u8(Math.max(bl << 1, l));
+      nbuf.set(buf);
+      buf = nbuf;
+    }
+  }
+  //  last chunk     chunktype literal   dist       lengths    lmask   dmask
+  let final = 0, type = 0, hLit = 0, hDist = 0, hcLen = 0, ml = 0, md = 0;
+  //  bitpos   bytes
+  let pos = 0, bt = 0;
+  //  len                dist
+  let lm: Uint16Array, dm: Uint16Array;
+  while (!final) {
+    // BFINAL - this is only 1 when last chunk is next
+    final = bits(dat, pos, 1);
+    // type: 0 = no compression, 1 = fixed huffman, 2 = dynamic huffman
+    type = bits(dat, pos + 1, 2);
+    pos += 3;
+    if (!type) {
+      // go to end of byte boundary
+      if (pos & 7) pos += 8 - (pos & 7);
+      const s = (pos >>> 3) + 4, l = dat[s - 4] | (dat[s - 3] << 8);
+      // ensure size
+      if (noBuf) cbuf(bt + l);
+      // Copy over uncompressed data
+      buf.set(dat.subarray(s, s + l), bt);
+      // Get new bitpos, update byte count
+      pos = (s + l) << 3, bt += l;
+      continue;
+    }
+    // Make sure the buffer can hold this + the largest possible addition
+    // maximum chunk size (practically, theoretically infinite) is 2^17;
+    if (noBuf) cbuf(bt + 131072);
+    if (type == 1) {
+      lm = flm;
+      dm = fdm;
+      ml = 511;
+      md = 31;
+    }
+    else if (type == 2) {
+      hLit = bits(dat, pos, 5) + 257;
+      hDist = bits(dat, pos + 5, 5) + 1;
+      hcLen = bits(dat, pos + 10, 4) + 4;
+      pos += 14;
+      // length+distance tree
+      const ldt = new u8(hLit + hDist);
+      // code length tree
+      const clt = new u8(19);
+      for (let i = 0; i < hcLen; ++i) {
+        // use index map to get real code
+        clt[clim[i]] = bits(dat, pos + i * 3, 3);
+      }
+      pos += hcLen * 3;
+      // code lengths bits
+      const clb = max(clt);
+      // code lengths map
+      const clm = hMap(clt, clb, 0);
+      for (let i = 0; i < ldt.length;) {
+        const r = clm[bits(dat, pos, clb)];
+        // bits read
+        pos += r & 15;
+        // symbol
+        const s = r >>> 4;
+        // code length to copy
+        if (s < 16) {
+          ldt[i++] = s;
+        } else {
+          //  copy   count
+          let c = 0, n = 0;
+          if (s == 16) n = 3 + bits(dat, pos, 2), pos += 2, c = ldt[i - 1];
+          else if (s == 17) n = 3 + bits(dat, pos, 3), pos += 3;
+          else if (s == 18) n = 11 + bits(dat, pos, 7), pos += 7;
+          while (n--) ldt[i++] = c;
+        }
+      }
+      //    length tree                 distance tree
+      const lt = ldt.subarray(0, hLit), dt = ldt.subarray(hLit);
+      // max length bits
+      const mlb = max(lt)
+      // max dist bits
+      const mdb = max(dt);
+      ml = (1 << mlb) - 1;
+      lm = hMap(lt, mlb, 0);
+      md = (1 << mdb) - 1;
+      dm = hMap(dt, mdb, 0);
+    }
+    for (;;) {
+      // bits read, code
+      const c = lm[bits16(dat, pos) & ml];
+      pos += c & 15;
+      // code
+      const sym = c >>> 4;
+      if (sym < 256) buf[bt++] = sym;
+      else if (sym == 256) break;
+      else {
+        let end = bt + sym - 254;
+        // no extra bits needed if less
+        if (sym > 264) {
+          // index
+          const i = sym - 257;
+          end = bt + bits(dat, pos, fleb[i]) + fl[i];
+          pos += fleb[i];
+        }
+        // dist
+        const d = dm[bits16(dat, pos) & md];
+        pos += d & 15;
+        const dsym = d >>> 4;
+        let dt = fd[dsym];
+        if (dsym > 3) {
+          dt += bits16(dat, pos) & ((1 << fdeb[dsym]) - 1);
+          pos += fdeb[dsym];
+        }
+        if (noBuf) cbuf(bt + 131072);
+        while (bt < end) {
+          buf[bt] = buf[bt++ - dt];
+          buf[bt] = buf[bt++ - dt];
+          buf[bt] = buf[bt++ - dt];
+          buf[bt] = buf[bt++ - dt];
+        }
+        bt = end;
+      }
+    }
+  }
+  return buf.slice(0, bt);
+}
+
+// starting at p, write the minimum number of bits that can hold v to ds
+const wbits = (d: Uint8Array, p: number, v: number) => {
+  v <<= p & 7;
+  const o = p >>> 3;
+  d[o] |= v;
+  d[o + 1] |= v >>> 8;
+}
+
+// starting at p, write the minimum number of bits (>8) that can hold v to ds
+const wbits16 = (d: Uint8Array, p: number, v: number) => {
+  v <<= p & 7;
+  const o = p >>> 3;
+  d[o] |= v;
+  d[o + 1] |= v >>> 8;
+  d[o + 2] |= v >>> 16;
+}
+
+type HuffNode = {
+  // symbol
+  s: number;
+  // frequency
+  f: number;
+  // left child
+  l?: HuffNode;
+  // right child
+  r?: HuffNode;
+};
+
+// creates code lengths from a frequency table
+const hTree = (d: Uint16Array, mb: number) => {
+  // Need extra info to make a tree
+  const t: HuffNode[] = [];
+  for (let i = 0; i < d.length; ++i) {
+    if (d[i]) t.push({ s: i, f: d[i] });
+  }
+  const s = t.length;
+  const t2 = t.slice();
+  if (s == 0) return [new u8(0), 0] as const;
+  if (s == 1) {
+    const v = new u8(t[0].s + 1);
+    v[t[0].s] = 1;
+    return [v, 1] as const;
+  }
+  t.sort((a, b) => a.f - b.f);
+  // after i2 reaches last ind, will be stopped
+  // freq must be greater than largest possible number of symbols
+  t.push({ s: -1, f: 25001 });
+  let l = t[0], r = t[1], i0 = 0, i1 = 1, i2 = 2;
+  t[0] = { s: -1, f: l.f + r.f, l, r };
+  // efficient algorithm from UZIP.js
+  // i0 is lookbehind, i2 is lookahead - after processing two low-freq
+  // symbols that combined have high freq, will start processing i2 (high-freq,
+  // non-composite) symbols instead
+  // see https://reddit.com/r/photopea/comments/ikekht/uzipjs_questions/
+	while (i1 != s - 1) {
+    l = t[t[i0].f < t[i2].f ? i0++ : i2++];
+    r = t[i0 != i1 && t[i0].f < t[i2].f ? i0++ : i2++];
+    t[i1++] = { s: -1, f: l.f + r.f, l, r };
+  }
+  let maxSym = t2[0].s;
+  for (let i = 1; i < s; ++i) {
+    if (t2[i].s > maxSym) maxSym = t2[i].s;
+  }
+  // code lengths
+  const tr = new u16(maxSym + 1);
+  // max bits in tree
+  let mbt = ln(t[i1 - 1], tr, 0);
+  if (mbt > mb) {
+    // more algorithms from UZIP.js
+    // TODO: find out how this code works (debt)
+    //  ind    debt
+    let i = 0, dt = 0;
+    //    left            cost
+    const lft = mbt - mb, cst = 1 << lft;
+    t2.sort((a, b) => tr[b.s] - tr[a.s] || a.f - b.f);
+    for (; i < s; ++i) {
+      const i2 = t2[i].s;
+      if (tr[i2] > mb) {
+        dt += cst - (1 << (mbt - tr[i2]));
+        tr[i2] = mb;
+      } else break;
+    }
+    dt >>>= lft;
+    while (dt > 0) {
+      const i2 = t2[i].s;
+      if (tr[i2] < mb) dt -= 1 << (mb - tr[i2]++ - 1);
+      else ++i;
+    }
+    for (; i >= 0 && dt; --i) {
+      const i2 = t2[i].s;
+      if (tr[i2] == mb) {
+        --tr[i2];
+        ++dt;
+      }
+    }
+    mbt = mb;
+  }
+  return [new u8(tr), mbt] as const;
+}
+// get the max length and assign length codes
+const ln = (n: HuffNode, l: Uint16Array, d: number): number => {
+  return n.s == -1
+    ? Math.max(ln(n.l, l, d + 1), ln(n.r, l, d + 1))
+    : (l[n.s] = d);
+}
+
+// length codes generation
+const lc = (c: Uint8Array) => {
+  let s = c.length;
+  // Note that the semicolon was intentional
+  while (s && !c[--s]);
+  const cl = new u16(++s);
+  //  ind      num         streak
+  let cli = 0, cln = c[0], cls = 1;
+  const w = (v: number) => { cl[cli++] = v; }
+  for (let i = 1; i <= s; ++i) {
+    if (c[i] == cln && i != s)
+      ++cls;
+    else {
+      if (!cln && cls > 2) {
+        for (; cls > 138; cls -= 138) w(32754);
+        if (cls > 2) {
+          w(cls > 10 ? ((cls - 11) << 5) | 28690 : ((cls - 3) << 5) | 12305);
+          cls = 0;
+        }
+      } else if (cls > 3) {
+        w(cln), --cls;
+        for (; cls > 6; cls -= 6) w(8304);
+        if (cls > 2) w(((cls - 3) << 5) | 8208), cls = 0;
+      }
+      while (cls--) w(cln);
+      cls = 1;
+      cln = c[i];
+    }
+  }
+  return [cl.slice(0, cli), s] as const;
+}
+
+// calculate the length of output from tree, code lengths
+const clen = (cf: Uint16Array, cl: Uint8Array) => {
+  let l = 0;
+  for (let i = 0; i < cl.length; ++i) l += cf[i] * cl[i];
+  return l;
+}
+
+// writes a fixed block
+// returns the new bit pos
+const wfblk = (out: Uint8Array, pos: number, dat: Uint8Array) => {
+  // no need to write 00 as type: TypedArray defaults to 0
+  const s = dat.length;
+  const o = (pos + 2) >>> 3;
+  out[o + 1] = s & 255;
+  out[o + 2] = s >>> 8;
+  out[o + 3] = out[o + 1] ^ 255;
+  out[o + 4] = out[o + 2] ^ 255;
+  out.set(dat, o + 5);
+  return (o + 4 + s) << 3;
+}
+
+// writes a block
+const wblk = (dat: Uint8Array, out: Uint8Array, final: number, syms: Uint32Array, lf: Uint16Array, df: Uint16Array, eb: number, li: number, bs: number, bl: number, p: number) => {
+  wbits(out, p++, final);
+  ++lf[256];
+  const [dlt, mlb] = hTree(lf, 15);
+  const [ddt, mdb] = hTree(df, 15);
+  const [lclt, nlc] = lc(dlt);
+  const [lcdt, ndc] = lc(ddt);
+  const lcfreq = new u16(19);
+  for (let i = 0; i < lclt.length; ++i) lcfreq[lclt[i] & 31]++;
+  for (let i = 0; i < lcdt.length; ++i) lcfreq[lcdt[i] & 31]++;
+  const [lct, mlcb] = hTree(lcfreq, 7);
+  let nlcc = 19;
+  for (; nlcc > 4 && !lct[clim[nlcc - 1]]; --nlcc);
+  const flen = (bl + 5) << 3;
+  const ftlen = clen(lf, flt) + clen(df, fdt) + eb;
+  const dtlen = clen(lf, dlt) + clen(df, ddt) + eb + 14 + 3 * nlcc + clen(lcfreq, lct) + (2 * lcfreq[16] + 3 * lcfreq[17] + 7 * lcfreq[18]);
+  if (flen < ftlen && flen < dtlen) return wfblk(out, p, dat.subarray(bs, bs + bl));
+  let lm: Uint16Array, ll: Uint8Array, dm: Uint16Array, dl: Uint8Array;
+  wbits(out, p, 1 + (dtlen < ftlen as unknown as number)), p += 2;
+  if (dtlen < ftlen) {
+    lm = hMap(dlt, mlb, 1), ll = dlt, dm = hMap(ddt, mdb, 1), dl = ddt;
+    const llm = hMap(lct, mlcb, 1);
+    wbits(out, p, nlc - 257);
+    wbits(out, p + 5, ndc - 1);
+    wbits(out, p + 10, nlcc - 4);
+    p += 14;
+    for (let i = 0; i < nlcc; ++i) wbits(out, p + 3 * i, lct[clim[i]]);
+    p += 3 * nlcc;
+    const lcts = [lclt, lcdt];
+    for (let it = 0; it < 2; ++it) {
+      const clct = lcts[it];
+      for (let i = 0; i < clct.length; ++i) {
+        const len = clct[i] & 31;
+        wbits(out, p, llm[len]), p += lct[len];
+        if (len > 15) wbits(out, p, (clct[i] >>> 5) & 127), p += clct[i] >>> 12;
+      }
+    }
+  } else {
+    lm = flnm, ll = flt, dm = fdnm, dl = fdt;
+  }
+  for (let i = 0; i < li; ++i) {
+    if (syms[i] > 255) {
+      const len = (syms[i] >>> 18) & 31;
+      wbits16(out, p, lm[len + 257]), p += ll[len + 257];
+      if (len > 7) wbits(out, p, (syms[i] >>> 23) & 31), p += fleb[len];
+      const dst = syms[i] & 31;
+      wbits16(out, p, dm[dst]), p += dl[dst];
+      if (dst > 3) wbits16(out, p, (syms[i] >>> 5) & 8191), p += fdeb[dst];
+    } else {
+      wbits16(out, p, lm[syms[i]]), p += ll[syms[i]];
+    }
+  }
+  wbits16(out, p, lm[256]);
+  return p + ll[256];
+}
+
+// deflate options (nice << 13) | chain
+const deo = new u32([65540, 131080, 131088, 131104, 262176, 1048704, 1048832, 2114560, 2117632]);
+
+// compresses data into a raw DEFLATE buffer
+const dflt = (dat: Uint8Array, lvl: number, plvl: number, pre: number, post: number) => {
+  const s = dat.length;
+  const o = new u8(pre + s + 5 * Math.ceil(s / 7000) + post);
+  // writing to this writes to the output buffer
+  const w = o.subarray(pre, o.length - post);
+  let pos = 0;
+  if (!lvl || dat.length < 4) {
+    for (let i = 0; i < s; i += 65535) {
+      // end
+      const e = i + 65535;
+      if (e < s) {
+        // write full block
+        pos = wfblk(w, pos, dat.subarray(i, e));
+      } else {
+        // write final block
+        w[i] = 1;
+        pos = wfblk(w, pos, dat.subarray(i, s));
+      }
+    }
+  } else {
+    const opt = deo[lvl - 1];
+    const n = opt >>> 13, c = opt & 8191;
+    const msk = (1 << plvl) - 1;
+    //    prev 2-byte val map    curr 2-byte val map
+    const prev = new u16(32768), head = new u16(msk + 1);
+    const bs1 = Math.ceil(plvl / 3), bs2 = 2 * bs1;
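+    // hsh folds the next 3 bytes into a plvl-bit bucket index; equal 3-byte
+    // sequences always land in the same chain, and collisions are re-verified below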
+    const hsh = (i: number) => (dat[i] ^ (dat[i + 1] << bs1) ^ (dat[i + 2] << bs2)) & msk;
+    // 24576 is an arbitrary number of maximum symbols per block
+    // 424 buffer for last block
+    const syms = new u32(25000);
+    // length/literal freq   distance freq
+    const lf = new u16(286), df = new u16(30);
+    //  l/lcnt  exbits  index  l/lind  waitdx  bitpos
+    let lc = 0, eb = 0, i = 0, li = 0, wi = 0, bs = 0;
+    for (; i < s; ++i) {
+      // hash value
+      const hv = hsh(i);
+      // index mod 32768
+      let imod = i & 32767;
+      // previous index with this value
+      let pimod = head[hv];
+      prev[imod] = pimod;
+      head[hv] = imod;
+      // We always should modify head and prev, but only add symbols if
+      // this data is not yet processed ("wait" for wait index)
+      if (wi <= i) {
+        // bytes remaining
+        const rem = s - i;
+        if ((lc > 7000 || li > 24576) && rem > 423) {
+          pos = wblk(dat, w, 0, syms, lf, df, eb, li, bs, i - bs, pos);
+          li = lc = eb = 0, bs = i;
+          for (let j = 0; j < 286; ++j) lf[j] = 0;
+          for (let j = 0; j < 30; ++j) df[j] = 0;
+        }
+        //  len    dist   chain
+        let l = 2, d = 0, ch = c, dif = (imod - pimod) & 32767;
+        if (rem > 2 && hv == hsh(i - dif)) {
+          const maxn = Math.min(n, rem);
+          const maxd = Math.min(32767, i);
+          // max possible length
+          // not capped at dif because decompressors implement "rolling" index population
+          const ml = Math.min(258, rem);
+          while (dif <= maxd && --ch && imod != pimod) {
+            if (dat[i + l] == dat[i + l - dif]) {
+              let nl = 0;
+              for (; nl < ml && dat[i + nl] == dat[i + nl - dif]; ++nl);
+              if (nl > l) {
+                l = nl, d = dif;
+                // break out early when we reach "nice" (we are satisfied enough)
+                if (nl >= maxn) break;
+                // now, find the rarest 2-byte sequence within this
+                // length of literals and search for that instead.
+                // Much faster than just using the start
+                const mmd = Math.min(dif, nl - 2);
+                let md = 0;
+                for (let j = 0; j < mmd; ++j) {
+                  const ti = (i - dif + j + 32768) & 32767;
+                  const pti = prev[ti];
+                  const cd = (ti - pti + 32768) & 32767;
+                  if (cd > md) md = cd, pimod = ti;
+                }
+              }
+            }
+            // check the previous match
+            imod = pimod, pimod = prev[imod];
+            dif += (imod - pimod + 32768) & 32767;
+          }
+        }
+        // d will be nonzero only when a match was found
+        if (d) {
+          // store both dist and len data in one Uint32
+          // Make sure this is recognized as a len/dist with 28th bit (2^28)
+          syms[li++] = 268435456 | (revfl[l] << 18) | revfd[d];
+          const lin = revfl[l] & 31, din = revfd[d] & 31;
+          eb += fleb[lin] + fdeb[din];
+          ++lf[257 + lin];
+          ++df[din];
+          wi = i + l;
+          ++lc;
+        } else {
+          syms[li++] = dat[i];
+          ++lf[dat[i]];
+        }
+      }
+    }
+    if (bs != i) pos = wblk(dat, w, 1, syms, lf, df, eb, li, bs, i - bs, pos);
+  }
+  // round the final bit position up to a whole byte when slicing the output
+  return o.slice(0, pre + ((pos + 7) >>> 3) + post);
+}
+
+
+// CRC32 table
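+// (standard reflected CRC-32, polynomial 0xEDB88320, one entry per byte value)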
+const crct = new u32(256);
+for (let i = 0; i < 256; ++i) {
+  let c = i, k = 9;
+  while (--k) c = ((c & 1) && 0xEDB88320) ^ (c >>> 1);
+  crct[i] = c;
+}
+
+/**
+ * Options for compressing data into a DEFLATE format
+ */
+export interface DeflateOptions {
+  /**
+   * The level of compression to use, ranging from 0-9.
+   * 
+   * 0 will store the data without compression.
+   * 1 is fastest but compresses the worst, 9 is slowest but compresses the best.
+   * The default level is 6.
+   * 
+   * Typically, binary data benefits much more from higher values than text data.
+   * In both cases, higher levels usually take disproportionately more time relative to the additional reduction in final size they achieve.
+   * 
+   * For example, a 1 MB text file could:
+   * - become 1.01 MB with level 0 in 1ms
+   * - become 400 kB with level 1 in 10ms
+   * - become 320 kB with level 9 in 100ms
+   */
+  level?: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9;
+  /**
+   * The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
+   * 
+   * Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
+   * It is recommended not to lower the value below 4, since that tends to hurt performance.
+   * 
+   * The default value is automatically determined based on the size of the input data.
+   */
+  mem?: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12;
+};
+
+/**
+ * Options for compressing data into a GZIP format
+ */
+export interface GZIPOptions extends DeflateOptions {
+  /**
+   * When the file was last modified. Defaults to the current time.
+   * Set this to 0 to avoid specifying a modification date entirely.
+   */
+  mtime?: Date | string | number;
+  /**
+   * The filename of the data. If the `gunzip` command is used to decompress the data, it will output a file
+   * with this name instead of the name of the compressed file.
+   */
+  filename?: string;
+}
+
+/**
+ * Options for compressing data into a Zlib format
+ */
+export interface ZlibOptions extends DeflateOptions {}
+
+// deflate with opts
+const dopt = (dat: Uint8Array, opt: DeflateOptions, pre: number, post: number) =>
+  dflt(dat, opt.level == null ? 6 : opt.level, 12 + (opt.mem == null ? 4 : opt.mem), pre, post);
+
+// write bytes
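+// (little-endian; e.g. wbytes(d, 0, 0x04030201) writes the bytes 01 02 03 04)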
+const wbytes = (d: Uint8Array, b: number, v: number) => {
+  for (let i = b; v; ++i) d[i] = v, v >>>= 8;
+}
+
+/**
+ * Compresses data with DEFLATE without any wrapper
+ * @param data The data to compress
+ * @param opts The compression options
+ * @returns The deflated version of the data
+ */
+export function deflate(data: Uint8Array, opts: DeflateOptions = {}) {
+  return dopt(data, opts, 0, 0);
+}
+
+/**
+ * Expands DEFLATE data with no wrapper
+ * @param data The data to decompress
+ * @param out Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length.
+ * @returns The decompressed version of the data
+ */
+export function inflate(data: Uint8Array, out?: Uint8Array) {
+  return inflt(data, out);
+}
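+
+// Example usage (illustrative only; the sample bytes are made up):
+//   const packed = deflate(new Uint8Array([1, 2, 3, 1, 2, 3]), { level: 9, mem: 8 });
+//   const restored = inflate(packed); // same bytes as the original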
+
+/**
+ * Compresses data with GZIP
+ * @param data The data to compress
+ * @param opts The compression options
+ * @returns The gzipped version of the data
+ */
+export function gzip(data: Uint8Array, opts: GZIPOptions = {}) {
+  const fn = opts.filename;
+  const l = data.length, raw = dopt(data, opts, 10 + ((fn && fn.length + 1) || 0), 8), s = raw.length;
+  raw[0] = 31, raw[1] = 139, raw[2] = 8, raw[8] = opts.level == 0 ? 4 : opts.level == 9 ? 2 : 3, raw[9] = 255;
+  // an mtime of 0 means "no modification time" (see GZIPOptions), so leave the field zeroed
+  if (opts.mtime != 0)
+    wbytes(raw, 4, Math.floor((new Date(opts.mtime as (string | number) || Date.now()) as unknown as number) / 1000));
+  if (fn) for (let i = 0; i <= fn.length; ++i) raw[i + 10] = fn.charCodeAt(i);
+  // CRC32
+  let crc = 0xFFFFFFFF;
+  for (let i = 0; i < l; ++i) crc = crct[(crc & 255) ^ data[i]] ^ (crc >>> 8);
+  wbytes(raw, s - 8, crc ^ 0xFFFFFFFF), wbytes(raw, s - 4, l);
+  return raw;
+}
+
+/**
+ * Expands GZIP data
+ * @param data The data to decompress
+ * @param out Where to write the data. GZIP already encodes the output size, so providing this doesn't save memory.
+ * @returns The decompressed version of the data
+ */
+export function gunzip(data: Uint8Array, out?: Uint8Array) {
+  const l = data.length;
+  if (l < 18 || data[0] != 31 || data[1] != 139 || data[2] != 8) throw new Error('invalid gzip data');
+  const flg = data[3];
+  let st = 10 + (flg & 2);
+  if (flg & 4) st += (data[10] | (data[11] << 8)) + 2;
+  for (let zs = (flg >> 3 & 1) + (flg >> 4 & 1); zs > 0; zs -= (data[st++] == 0) as unknown as number);
+  if (!out) out = new Uint8Array(data[l - 4] | data[l - 3] << 8 | data[l - 2] << 16 | data[l - 1] << 24);
+  return inflt(data.subarray(st, -8), out);
+}
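+
+// Example usage (illustrative only; `fileData` and the filename are made up):
+//   const zipped = gzip(fileData, { level: 6, filename: 'hello.txt' });
+//   const original = gunzip(zipped);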
+
+/**
+ * Compresses data with Zlib
+ * @param data The data to compress
+ * @param opts The compression options
+ * @returns The zlib-compressed version of the data
+ */
+export function zlib(data: Uint8Array, opts: ZlibOptions = {}) {
+  const l = data.length, raw = dopt(data, opts, 2, 4), s = raw.length;
+  const lv = opts.level, fl = lv == 0 ? 0 : lv < 6 ? 1 : lv == 9 ? 3 : 2;
+  raw[0] = 120, raw[1] = (fl << 6) | (fl ? (32 - 2 * fl) : 1);
+  // Adler32
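+  // up to 5552 bytes can be summed before the 32-bit accumulators must be
+  // reduced modulo 65521 (the largest prime below 2^16)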
+  let a = 1, b = 0;
+  for (let i = 0; i != l;) {
+    const e = Math.min(i + 5552, l);
+    for (; i < e; ++i) a += data[i], b += a;
+    a %= 65521, b %= 65521;
+  }
+  raw[s - 4] = b >>> 8, raw[s - 3] = b & 255, raw[s - 2] = a >>> 8, raw[s - 1] = a & 255;
+  return raw;
+}
+
+/**
+ * Expands Zlib data
+ * @param data The data to decompress
+ * @param out Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length.
+ * @returns The decompressed version of the data
+ */
+export function unzlib(data: Uint8Array, out?: Uint8Array) {
+  const l = data.length;
+  if (l < 6 || (data[0] & 15) != 8 || (data[0] >>> 4) > 7) throw new Error('invalid zlib data');
+  if (data[1] & 32) throw new Error('invalid zlib data: dictionaries not supported');
+  return inflt(data.subarray(2, -4), out);
+}
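+
+// Example usage (illustrative only; `payload` is a made-up Uint8Array):
+//   const z = zlib(payload, { level: 9 });
+//   const back = unzlib(z);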
+
+// Default algorithm for compression (used because having a known output size allows faster decompression)
+export { gzip as compress };
+
+/**
+ * Expands compressed GZIP, Zlib, or raw DEFLATE data, automatically detecting the format
+ * @param data The data to decompress
+ * @param out Where to write the data. Saves memory if you know the decompressed size and provide an output buffer of that length.
+ * @returns The decompressed version of the data
+ */
+export function decompress(data: Uint8Array, out?: Uint8Array) {
+  if (data[0] == 31 && data[1] == 139 && data[2] == 8) return gunzip(data, out);
+  if ((data[0] & 15) != 8 || (data[0] >> 4) > 7) return inflate(data, out);
+  return unzlib(data, out);
+}
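+
+// Example usage (illustrative only): the same call works on GZIP, Zlib, or raw
+// DEFLATE output, e.g. decompress(gzip(data)), decompress(zlib(data, {})),
+// or decompress(deflate(data)).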

+ 248 - 0
yarn.lock

@@ -2,17 +2,265 @@
 # yarn lockfile v1
 
 
+"@types/[email protected]":
+  version "3.0.3"
+  resolved "https://registry.yarnpkg.com/@types/minimatch/-/minimatch-3.0.3.tgz#3dca0e3f33b200fc7d1139c0cd96c1268cadfd9d"
+  integrity sha512-tHq6qdbT9U1IRSGf14CL0pUlULksvY9OZ+5eEgl1N7t+OA3tGvNpxJCzuKQlsNgCVwbAs670L1vcVQi8j9HjnA==
+
+backbone@^1.4.0:
+  version "1.4.0"
+  resolved "https://registry.yarnpkg.com/backbone/-/backbone-1.4.0.tgz#54db4de9df7c3811c3f032f34749a4cd27f3bd12"
+  integrity sha512-RLmDrRXkVdouTg38jcgHhyQ/2zjg7a8E6sz2zxfz21Hh17xDJYUHBZimVIt5fUyS8vbfpeSmTL3gUjTEvUV3qQ==
+  dependencies:
+    underscore ">=1.8.3"
+
+balanced-match@^1.0.0:
+  version "1.0.0"
+  resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.0.tgz#89b4d199ab2bee49de164ea02b89ce462d71b767"
+  integrity sha1-ibTRmasr7kneFk6gK4nORi1xt2c=
+
+brace-expansion@^1.1.7:
+  version "1.1.11"
+  resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-1.1.11.tgz#3c7fcbf529d87226f3d2f52b966ff5271eb441dd"
+  integrity sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==
+  dependencies:
+    balanced-match "^1.0.0"
+    concat-map "0.0.1"
+
+concat-map@0.0.1:
+  version "0.0.1"
+  resolved "https://registry.yarnpkg.com/concat-map/-/concat-map-0.0.1.tgz#d8a96bd77fd68df7793a73036a3ba0d5405d477b"
+  integrity sha1-2Klr13/Wjfd5OnMDajug1UBdR3s=
+
+fs-extra@^8.1.0:
+  version "8.1.0"
+  resolved "https://registry.yarnpkg.com/fs-extra/-/fs-extra-8.1.0.tgz#49d43c45a88cd9677668cb7be1b46efdb8d2e1c0"
+  integrity sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==
+  dependencies:
+    graceful-fs "^4.2.0"
+    jsonfile "^4.0.0"
+    universalify "^0.1.0"
+
+fs.realpath@^1.0.0:
+  version "1.0.0"
+  resolved "https://registry.yarnpkg.com/fs.realpath/-/fs.realpath-1.0.0.tgz#1504ad2523158caa40db4a2787cb01411994ea4f"
+  integrity sha1-FQStJSMVjKpA20onh8sBQRmU6k8=
+
+glob@^7.0.0:
+  version "7.1.6"
+  resolved "https://registry.yarnpkg.com/glob/-/glob-7.1.6.tgz#141f33b81a7c2492e125594307480c46679278a6"
+  integrity sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==
+  dependencies:
+    fs.realpath "^1.0.0"
+    inflight "^1.0.4"
+    inherits "2"
+    minimatch "^3.0.4"
+    once "^1.3.0"
+    path-is-absolute "^1.0.0"
+
+graceful-fs@^4.1.6, graceful-fs@^4.2.0:
+  version "4.2.4"
+  resolved "https://registry.yarnpkg.com/graceful-fs/-/graceful-fs-4.2.4.tgz#2256bde14d3632958c465ebc96dc467ca07a29fb"
+  integrity sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw==
+
+handlebars@^4.7.2, handlebars@^4.7.6:
+  version "4.7.6"
+  resolved "https://registry.yarnpkg.com/handlebars/-/handlebars-4.7.6.tgz#d4c05c1baf90e9945f77aa68a7a219aa4a7df74e"
+  integrity sha512-1f2BACcBfiwAfStCKZNrUCgqNZkGsAT7UM3kkYtXuLo0KnaVfjKOyf7PRzB6++aK9STyT1Pd2ZCPe3EGOXleXA==
+  dependencies:
+    minimist "^1.2.5"
+    neo-async "^2.6.0"
+    source-map "^0.6.1"
+    wordwrap "^1.0.0"
+  optionalDependencies:
+    uglify-js "^3.1.4"
+
+highlight.js@^9.18.0:
+  version "9.18.3"
+  resolved "https://registry.yarnpkg.com/highlight.js/-/highlight.js-9.18.3.tgz#a1a0a2028d5e3149e2380f8a865ee8516703d634"
+  integrity sha512-zBZAmhSupHIl5sITeMqIJnYCDfAEc3Gdkqj65wC1lpI468MMQeeQkhcIAvk+RylAkxrCcI9xy9piHiXeQ1BdzQ==
+
+inflight@^1.0.4:
+  version "1.0.6"
+  resolved "https://registry.yarnpkg.com/inflight/-/inflight-1.0.6.tgz#49bd6331d7d02d0c09bc910a1075ba8165b56df9"
+  integrity sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=
+  dependencies:
+    once "^1.3.0"
+    wrappy "1"
+
+inherits@2:
+  version "2.0.4"
+  resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c"
+  integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==
+
+interpret@^1.0.0:
+  version "1.4.0"
+  resolved "https://registry.yarnpkg.com/interpret/-/interpret-1.4.0.tgz#665ab8bc4da27a774a40584e812e3e0fa45b1a1e"
+  integrity sha512-agE4QfB2Lkp9uICn7BAqoscw4SZP9kTE2hxiFI3jBPmXJfdqiahTbUuKGsMoN2GtqL9AxhYioAcVvgsb1HvRbA==
+
+jquery@^3.4.1:
+  version "3.5.1"
+  resolved "https://registry.yarnpkg.com/jquery/-/jquery-3.5.1.tgz#d7b4d08e1bfdb86ad2f1a3d039ea17304717abb5"
+  integrity sha512-XwIBPqcMn57FxfT+Go5pzySnm4KWkT1Tv7gjrpT1srtf8Weynl6R273VJ5GjkRb51IzMp5nbaPjJXMWeju2MKg==
+
+jsonfile@^4.0.0:
+  version "4.0.0"
+  resolved "https://registry.yarnpkg.com/jsonfile/-/jsonfile-4.0.0.tgz#8771aae0799b64076b76640fca058f9c10e33ecb"
+  integrity sha1-h3Gq4HmbZAdrdmQPygWPnBDjPss=
+  optionalDependencies:
+    graceful-fs "^4.1.6"
+
+lodash@^4.17.15:
+  version "4.17.20"
+  resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.20.tgz#b44a9b6297bcb698f1c51a3545a2b3b368d59c52"
+  integrity sha512-PlhdFcillOINfeV7Ni6oF1TAEayyZBoZ8bcshTHqOYJYlrqzRK5hagpagky5o4HfCzzd1TRkXPMFq6cKk9rGmA==
+
+lunr@^2.3.8:
+  version "2.3.9"
+  resolved "https://registry.yarnpkg.com/lunr/-/lunr-2.3.9.tgz#18b123142832337dd6e964df1a5a7707b25d35e1"
+  integrity sha512-zTU3DaZaF3Rt9rhN3uBMGQD3dD2/vFQqnvZCDv4dl5iOzq2IZQqTxu90r4E5J+nP70J3ilqVCrbho2eWaeW8Ow==
+
+marked@^0.8.0:
+  version "0.8.2"
+  resolved "https://registry.yarnpkg.com/marked/-/marked-0.8.2.tgz#4faad28d26ede351a7a1aaa5fec67915c869e355"
+  integrity sha512-EGwzEeCcLniFX51DhTpmTom+dSA/MG/OBUDjnWtHbEnjAH180VzUeAw+oE4+Zv+CoYBWyRlYOTR0N8SO9R1PVw==
+
+minimatch@^3.0.0, minimatch@^3.0.4:
+  version "3.0.4"
+  resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.0.4.tgz#5166e286457f03306064be5497e8dbb0c3d32083"
+  integrity sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==
+  dependencies:
+    brace-expansion "^1.1.7"
+
+minimist@^1.2.5:
+  version "1.2.5"
+  resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.5.tgz#67d66014b66a6a8aaa0c083c5fd58df4e4e97602"
+  integrity sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==
+
+neo-async@^2.6.0:
+  version "2.6.2"
+  resolved "https://registry.yarnpkg.com/neo-async/-/neo-async-2.6.2.tgz#b4aafb93e3aeb2d8174ca53cf163ab7d7308305f"
+  integrity sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==
+
+once@^1.3.0:
+  version "1.4.0"
+  resolved "https://registry.yarnpkg.com/once/-/once-1.4.0.tgz#583b1aa775961d4b113ac17d9c50baef9dd76bd1"
+  integrity sha1-WDsap3WWHUsROsF9nFC6753Xa9E=
+  dependencies:
+    wrappy "1"
+
 pako@^1.0.11:
   version "1.0.11"
   resolved "https://registry.yarnpkg.com/pako/-/pako-1.0.11.tgz#6c9599d340d54dfd3946380252a35705a6b992bf"
   integrity sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==
 
+path-is-absolute@^1.0.0:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/path-is-absolute/-/path-is-absolute-1.0.1.tgz#174b9268735534ffbc7ace6bf53a5a9e1b5c5f5f"
+  integrity sha1-F0uSaHNVNP+8es5r9TpanhtcX18=
+
+path-parse@^1.0.6:
+  version "1.0.6"
+  resolved "https://registry.yarnpkg.com/path-parse/-/path-parse-1.0.6.tgz#d62dbb5679405d72c4737ec58600e9ddcf06d24c"
+  integrity sha512-GSmOT2EbHrINBf9SR7CDELwlJ8AENk3Qn7OikK4nFYAu3Ote2+JYNVvkpAEQm3/TLNEJFD/xZJjzyxg3KBWOzw==
+
+progress@^2.0.3:
+  version "2.0.3"
+  resolved "https://registry.yarnpkg.com/progress/-/progress-2.0.3.tgz#7e8cf8d8f5b8f239c1bc68beb4eb78567d572ef8"
+  integrity sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==
+
+rechoir@^0.6.2:
+  version "0.6.2"
+  resolved "https://registry.yarnpkg.com/rechoir/-/rechoir-0.6.2.tgz#85204b54dba82d5742e28c96756ef43af50e3384"
+  integrity sha1-hSBLVNuoLVdC4oyWdW70OvUOM4Q=
+  dependencies:
+    resolve "^1.1.6"
+
+resolve@^1.1.6:
+  version "1.17.0"
+  resolved "https://registry.yarnpkg.com/resolve/-/resolve-1.17.0.tgz#b25941b54968231cc2d1bb76a79cb7f2c0bf8444"
+  integrity sha512-ic+7JYiV8Vi2yzQGFWOkiZD5Z9z7O2Zhm9XMaTxdJExKasieFCr+yXZ/WmXsckHiKl12ar0y6XiXDx3m4RHn1w==
+  dependencies:
+    path-parse "^1.0.6"
+
+shelljs@^0.8.3:
+  version "0.8.4"
+  resolved "https://registry.yarnpkg.com/shelljs/-/shelljs-0.8.4.tgz#de7684feeb767f8716b326078a8a00875890e3c2"
+  integrity sha512-7gk3UZ9kOfPLIAbslLzyWeGiEqx9e3rxwZM0KE6EL8GlGwjym9Mrlx5/p33bWTu9YG6vcS4MBxYZDHYr5lr8BQ==
+  dependencies:
+    glob "^7.0.0"
+    interpret "^1.0.0"
+    rechoir "^0.6.2"
+
+source-map@^0.6.1:
+  version "0.6.1"
+  resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263"
+  integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
+
+typedoc-default-themes@0.8.0-0:
+  version "0.8.0-0"
+  resolved "https://registry.yarnpkg.com/typedoc-default-themes/-/typedoc-default-themes-0.8.0-0.tgz#80b7080837b2c9eba36c2fe06601ebe01973a0cd"
+  integrity sha512-blFWppm5aKnaPOa1tpGO9MLu+njxq7P3rtkXK4QxJBNszA+Jg7x0b+Qx0liXU1acErur6r/iZdrwxp5DUFdSXw==
+  dependencies:
+    backbone "^1.4.0"
+    jquery "^3.4.1"
+    lunr "^2.3.8"
+    underscore "^1.9.1"
+
+typedoc-plugin-markdown@^3.0.2:
+  version "3.0.2"
+  resolved "https://registry.yarnpkg.com/typedoc-plugin-markdown/-/typedoc-plugin-markdown-3.0.2.tgz#d09c41e4c9640d6236204050a30624118eb73f8f"
+  integrity sha512-EZSqvPqpNDdA1fgKbQFbz5qH5SuhnbTPL7zMjzAzBi+YeAhGAfVIgU9PVUOxzUOp7eYcDNnu1JTzdtu779E1kA==
+  dependencies:
+    handlebars "^4.7.6"
+
+typedoc@^0.17.0-3:
+  version "0.17.0-3"
+  resolved "https://registry.yarnpkg.com/typedoc/-/typedoc-0.17.0-3.tgz#91996e77427ff3a208ab76595a927ee11b75e9e8"
+  integrity sha512-DO2djkR4NHgzAWfNbJb2eQKsFMs+gOuYBXlQ8dOSCjkAK5DRI7ZywDufBGPUw7Ue9Qwi2Cw1DxLd3reDq8wFuQ==
+  dependencies:
+    "@types/minimatch" "3.0.3"
+    fs-extra "^8.1.0"
+    handlebars "^4.7.2"
+    highlight.js "^9.18.0"
+    lodash "^4.17.15"
+    marked "^0.8.0"
+    minimatch "^3.0.0"
+    progress "^2.0.3"
+    shelljs "^0.8.3"
+    typedoc-default-themes "0.8.0-0"
+
 typescript@^4.0.2:
   version "4.0.2"
   resolved "https://registry.yarnpkg.com/typescript/-/typescript-4.0.2.tgz#7ea7c88777c723c681e33bf7988be5d008d05ac2"
   integrity sha512-e4ERvRV2wb+rRZ/IQeb3jm2VxBsirQLpQhdxplZ2MEzGvDkkMmPglecnNDfSUBivMjP93vRbngYYDQqQ/78bcQ==
 
+uglify-js@^3.1.4:
+  version "3.10.4"
+  resolved "https://registry.yarnpkg.com/uglify-js/-/uglify-js-3.10.4.tgz#dd680f5687bc0d7a93b14a3482d16db6eba2bfbb"
+  integrity sha512-kBFT3U4Dcj4/pJ52vfjCSfyLyvG9VYYuGYPmrPvAxRw/i7xHiT4VvCev+uiEMcEEiu6UNB6KgWmGtSUYIWScbw==
+
+underscore@>=1.8.3, underscore@^1.9.1:
+  version "1.11.0"
+  resolved "https://registry.yarnpkg.com/underscore/-/underscore-1.11.0.tgz#dd7c23a195db34267186044649870ff1bab5929e"
+  integrity sha512-xY96SsN3NA461qIRKZ/+qox37YXPtSBswMGfiNptr+wrt6ds4HaMw23TP612fEyGekRE6LNRiLYr/aqbHXNedw==
+
+universalify@^0.1.0:
+  version "0.1.2"
+  resolved "https://registry.yarnpkg.com/universalify/-/universalify-0.1.2.tgz#b646f69be3942dabcecc9d6639c80dc105efaa66"
+  integrity sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg==
+
 uzip@^0.20200919.0:
   version "0.20200919.0"
   resolved "https://registry.yarnpkg.com/uzip/-/uzip-0.20200919.0.tgz#a4ae1d13265f086021e2e7933412b9b8d9f06155"
   integrity sha512-bdwScsEC5g17c7qZAHceJAm1TCuJl6f8JvpREkF2voFx00NlqU5yewvJrggXvIddEkxwyJ3e0DSrh6NDul/RHg==
+
+wordwrap@^1.0.0:
+  version "1.0.0"
+  resolved "https://registry.yarnpkg.com/wordwrap/-/wordwrap-1.0.0.tgz#27584810891456a4171c8d0226441ade90cbcaeb"
+  integrity sha1-J1hIEIkUVqQXHI0CJkQa3pDLyus=
+
+wrappy@1:
+  version "1.0.2"
+  resolved "https://registry.yarnpkg.com/wrappy/-/wrappy-1.0.2.tgz#b5243d8f3ec1aa35f1364605bc0d1036e30ab69f"
+  integrity sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=