
Fix level 0; improve performance

Arjun Barrett 4 years ago
commit 4be5e6ee65

README.md (+ 13 - 10)

@@ -25,8 +25,8 @@ Import:
 ```js
 import * as fflate from 'fflate';
 // ALWAYS import only what you need to minimize bundle size.
-// So, if you just need gzip support:
-import { gzip, gunzip } from 'fflate';
+// So, if you just need GZIP compression support:
+import { gzip } from 'fflate';
 ```
 If your environment doesn't support ES Modules (e.g. Node.js):
 ```js
@@ -53,7 +53,7 @@ const massiveAgain = fflate.unzlib(notSoMassive);
 `fflate` can autodetect a compressed file's format as well:
 ```js
 const compressed = new Uint8Array(
-  await fetch('/unknownFormatCompressedFile').then(res => res.arrayBuffer())
+  await fetch('/GZIPorZLIBorDEFLATE').then(res => res.arrayBuffer())
 );
 // Again, Node.js Buffers work too. For example, the above could instead be:
 // Buffer.from('H4sIAAAAAAAA//NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=', 'base64');
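For reference, the Base64 blob in that comment is itself a complete GZIP stream, so a minimal sketch of the autodetection path (assuming a CommonJS/Node.js setup for `require` and `Buffer`) looks like this:

```js
const fflate = require('fflate');

// decompress() inspects the header bytes and routes to gunzip/unzlib/inflate on its own.
const compressed = Buffer.from('H4sIAAAAAAAA//NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=', 'base64');
const original = fflate.decompress(compressed);
console.log(Buffer.from(original).toString()); // should print: Hello world!
```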
@@ -67,7 +67,7 @@ const enc = new TextEncoder(), dec = new TextDecoder();
 const buf = enc.encode('Hello world!');
 
 // The default compression method is gzip
-// Increasing mem increases may increase performance at the cost of memory
+// Increasing mem may increase performance at the cost of memory
 // The mem ranges from 0 to 12, where 4 is the default
 const compressed = fflate.compress(buf, { level: 6, mem: 8 });
 
@@ -78,21 +78,23 @@ console.log(origText); // Hello world!
 ```
 Note that encoding the compressed data as a string, like in `pako`, is not nearly as efficient as binary for data transfer. However, you can do it:
 ```js
-const compressedDataToString = data => {
+// data to string
+const dts = data => {
   let result = '';
   for (let value of data) {
     result += String.fromCharCode(value);
   }
   return result;
 }
-const stringToCompressedData = str => {
+// string to data
+const std = str => {
   let result = new Uint8Array(str.length);
   for (let i = 0; i < str.length; ++i)
     result[i] = str.charCodeAt(i);
   return result;
 }
-const compressedString = compressedDataToString(fflate.compress(buf));
-const decompressed = fflate.decompress(stringToCompressedData(compressedString));
+const compressedString = dts(fflate.compress(buf));
+const decompressed = fflate.decompress(std(compressedString));
 ```
 
 See the [documentation](https://github.com/101arrowz/fflate/blob/master/docs/README.md) for more detailed information about the API.
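If the string form is meant for transport, the byte-for-byte string above can additionally be wrapped in Base64; a small sketch, assuming browser-style `btoa`/`atob` are available and reusing the `dts`/`std` helpers from the snippet above:

```js
// dts() yields only characters in the 0-255 range, so btoa() can encode it directly.
const toBase64 = data => btoa(dts(data));
const fromBase64 = b64 => std(atob(b64));

const b64 = toBase64(fflate.compress(buf));
const restored = fflate.decompress(fromBase64(b64));
```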
@@ -100,16 +102,17 @@ See the [documentation](https://github.com/101arrowz/fflate/blob/master/docs/REA
 ## What makes `fflate` so fast?
 Many JavaScript compression/decompression libraries exist. However, the most popular one, [`pako`](https://npmjs.com/package/pako), is merely a clone of Zlib rewritten nearly line-for-line in JavaScript. Although it is by no means poorly made, `pako` doesn't recognize the many differences between JavaScript and C, and therefore is suboptimal for performance. Moreover, even when minified, the library is 45 kB; it may not seem like much, but for anyone concerned with optimizing bundle size (especially library authors), it's more weight than necessary.
 
-Note that there exist some small libraries like [`tiny-inflate`](https://npmjs.com/package/tiny-inflate) for solely decompression, and with a minified size of 3 kB, it can be appealing; however, its performance is lackluster, typically 40% than `pako` in my tests.
+Note that there exist some small libraries like [`tiny-inflate`](https://npmjs.com/package/tiny-inflate) for solely decompression, and with a minified size of 3 kB, it can be appealing; however, its performance is lackluster, typically 40% worse than `pako` in my tests.
 
 [`UZIP.js`](https://github.com/photopea/UZIP.js) is both faster (by up to 40%) and smaller (14 kB minified) than `pako`, and it contains a variety of innovations that make it excellent for both performance and compression ratio. However, the developer made a variety of tiny mistakes and inefficient design choices that make it imperfect. Moreover, it does not support GZIP or Zlib data directly; one must remove the headers manually to use `UZIP.js`.
 
-So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the entire build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.
+So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds through tree shaking, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the entire build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.
 
 Before you decide that `fflate` is the end-all compression library, you should note that JavaScript simply cannot rival the performance of a compiled language. If you're willing to have 160 kB of extra weight and [much less browser support](https://caniuse.com/wasm), you can achieve more performance than `fflate` with a WASM build of Zlib like [`wasm-flate`](https://www.npmjs.com/package/wasm-flate). And if you're only using Node.js, just use the [native Zlib bindings](https://nodejs.org/api/zlib.html) that offer the best performance. Though note that even against these compiled libraries, `fflate` is only around 30% slower in decompression and 10% slower in compression, and can still achieve better compression ratios!
 
 ## Browser support
 `fflate` makes heavy use of typed arrays (`Uint8Array`, `Uint16Array`, etc.). Typed arrays can be polyfilled at the cost of performance, but the most recent browser that doesn't support them [is from 2011](https://caniuse.com/typedarrays), so I wouldn't bother.
 
+Other than that, `fflate` is completely ES3, meaning you probably won't even need a bundler to use it.
 ## License
 MIT
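Since the README stresses importing only what you need, here is a minimal sketch of a tree-shaken GZIP round trip (the actual size savings depend on your bundler):

```js
// Only gzip and gunzip are pulled in; a tree-shaking bundler can drop everything else.
import { gzip, gunzip } from 'fflate';

const data = new TextEncoder().encode('Hello world!');
const zipped = gzip(data, { level: 9, mem: 8 });
const unzipped = gunzip(zipped); // Uint8Array with the original bytes
```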

docs/README.md (+ 12 - 26)

@@ -24,8 +24,6 @@
 
 ▸ **decompress**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
 
-*Defined in [index.ts:774](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L774)*
-
 Expands compressed GZIP, Zlib, or raw DEFLATE data, automatically detecting the format
 
 #### Parameters:
@@ -41,18 +39,16 @@ ___
 
 ### deflate
 
-▸ **deflate**(`data`: Uint8Array, `opts`: [DeflateOptions](interfaces/deflateoptions.md)): Uint8Array
-
-*Defined in [index.ts:680](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L680)*
+▸ **deflate**(`data`: Uint8Array, `opts?`: [DeflateOptions](interfaces/deflateoptions.md)): Uint8Array
 
 Compresses data with DEFLATE without any wrapper
 
 #### Parameters:
 
-Name | Type | Default value | Description |
------- | ------ | ------ | ------ |
-`data` | Uint8Array | - | The data to compress |
-`opts` | [DeflateOptions](interfaces/deflateoptions.md) | {} | The compression options |
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to compress |
+`opts?` | [DeflateOptions](interfaces/deflateoptions.md) | The compression options |
 
 **Returns:** Uint8Array
 
@@ -62,8 +58,6 @@ ___
 
 ▸ **gunzip**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
 
-*Defined in [index.ts:720](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L720)*
-
 Expands GZIP data
 
 #### Parameters:
@@ -79,18 +73,16 @@ ___
 
 ### gzip
 
-▸ **gzip**(`data`: Uint8Array, `opts`: [GZIPOptions](interfaces/gzipoptions.md)): Uint8Array
-
-*Defined in [index.ts:700](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L700)*
+▸ **gzip**(`data`: Uint8Array, `opts?`: [GZIPOptions](interfaces/gzipoptions.md)): Uint8Array
 
 Compresses data with GZIP
 
 #### Parameters:
 
-Name | Type | Default value | Description |
------- | ------ | ------ | ------ |
-`data` | Uint8Array | - | The data to compress |
-`opts` | [GZIPOptions](interfaces/gzipoptions.md) | {} | The compression options |
+Name | Type | Description |
+------ | ------ | ------ |
+`data` | Uint8Array | The data to compress |
+`opts?` | [GZIPOptions](interfaces/gzipoptions.md) | The compression options |
 
 **Returns:** Uint8Array
 
@@ -100,8 +92,6 @@ ___
 
 ▸ **inflate**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
 
-*Defined in [index.ts:690](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L690)*
-
 Expands DEFLATE data with no wrapper
 
 #### Parameters:
@@ -119,8 +109,6 @@ ___
 
 ▸ **unzlib**(`data`: Uint8Array, `out?`: Uint8Array): Uint8Array
 
-*Defined in [index.ts:758](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L758)*
-
 Expands Zlib data
 
 #### Parameters:
@@ -136,9 +124,7 @@ ___
 
 ### zlib
 
-▸ **zlib**(`data`: Uint8Array, `opts`: [ZlibOptions](interfaces/zliboptions.md)): Uint8Array
-
-*Defined in [index.ts:737](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L737)*
+▸ **zlib**(`data`: Uint8Array, `opts?`: [ZlibOptions](interfaces/zliboptions.md)): Uint8Array
 
 Compress data with Zlib
 
@@ -147,6 +133,6 @@ Compress data with Zlib
 Name | Type | Description |
 ------ | ------ | ------ |
 `data` | Uint8Array | The data to compress |
-`opts` | [ZlibOptions](interfaces/zliboptions.md) | The compression options |
+`opts?` | [ZlibOptions](interfaces/zliboptions.md) | The compression options |
 
 **Returns:** Uint8Array
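With `opts` now optional, calls with and without options both type-check; a brief sketch, sticking to `deflate`/`inflate`, whose defaults are wired up through `dopt` in this commit:

```js
import { deflate, inflate } from 'fflate';

const data = new TextEncoder().encode('some text to squeeze');

// No options: level falls back to 6 and mem is derived from the input size.
const packed = deflate(data);

// Explicit options still work as before.
const packedSmall = deflate(data, { level: 9, mem: 8 });

const unpacked = inflate(packed); // raw DEFLATE, no GZIP/Zlib wrapper
```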

docs/interfaces/deflateoptions.md (+ 1 - 4)

@@ -23,8 +23,6 @@ Options for compressing data into a DEFLATE format
 
 • `Optional` **level**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9
 
-*Defined in [index.ts:632](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L632)*
-
 The level of compression to use, ranging from 0-9.
 
 0 will store the data without compression.
@@ -45,11 +43,10 @@ ___
 
 • `Optional` **mem**: 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9 \| 10 \| 11 \| 12
 
-*Defined in [index.ts:641](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L641)*
-
 The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
 
 Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
 It is recommended not to lower the value below 4, since that tends to hurt performance.
+In addition, values above 8 tend to help very little on most data and can even hurt performance.
 
 The default value is automatically determined based on the size of the input data.
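The quoted figures (4 kB at level 0, 64 kB at 4, 1 MB at 8, 16 MB at 12) follow a simple doubling rule, so the working buffer appears to be 4096 * 2^mem bytes; a tiny sketch of that arithmetic:

```js
// Reproduces the documented sizes: 0 -> 4 kB, 4 -> 64 kB, 8 -> 1024 kB (1 MB), 12 -> 16384 kB (16 MB).
const memKB = mem => (4096 * Math.pow(2, mem)) / 1024;
[0, 4, 8, 12].forEach(mem => console.log(mem, memKB(mem) + ' kB'));
```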

docs/interfaces/gzipoptions.md (+ 1 - 8)

@@ -23,8 +23,6 @@ Options for compressing data into a GZIP format
 
 • `Optional` **filename**: string
 
-*Defined in [index.ts:657](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L657)*
-
 The filename of the data. If the `gunzip` command is used to decompress the data, it will output a file
 with this name instead of the name of the compressed file.
 
@@ -36,8 +34,6 @@ ___
 
 *Inherited from [DeflateOptions](deflateoptions.md).[level](deflateoptions.md#level)*
 
-*Defined in [index.ts:632](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L632)*
-
 The level of compression to use, ranging from 0-9.
 
 0 will store the data without compression.
@@ -60,12 +56,11 @@ ___
 
 *Inherited from [DeflateOptions](deflateoptions.md).[mem](deflateoptions.md#mem)*
 
-*Defined in [index.ts:641](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L641)*
-
 The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
 
 Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
 It is recommended not to lower the value below 4, since that tends to hurt performance.
+In addition, values above 8 tend to help very little on most data and can even hurt performance.
 
 The default value is automatically determined based on the size of the input data.
 
@@ -75,7 +70,5 @@ ___
 
 • `Optional` **mtime**: Date \| string \| number
 
-*Defined in [index.ts:652](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L652)*
-
 When the file was last modified. Defaults to the current time.
 Set this to 0 to avoid specifying a modification date entirely.
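Putting the GZIP-specific options together, a short illustrative example (the filename and payload are placeholders, not values from the docs):

```js
import { gzip } from 'fflate';

const data = new TextEncoder().encode('example payload');

// filename is what a command-line `gunzip` would write out;
// mtime: 0 omits the modification time for reproducible output.
const zipped = gzip(data, {
  filename: 'example.txt',
  mtime: 0,
  level: 9,
  mem: 8
});
```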

docs/interfaces/zliboptions.md (+ 1 - 4)

@@ -23,8 +23,6 @@ Options for compressing data into a Zlib format
 
 *Inherited from [DeflateOptions](deflateoptions.md).[level](deflateoptions.md#level)*
 
-*Defined in [index.ts:632](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L632)*
-
 The level of compression to use, ranging from 0-9.
 
 0 will store the data without compression.
@@ -47,11 +45,10 @@ ___
 
 *Inherited from [DeflateOptions](deflateoptions.md).[mem](deflateoptions.md#mem)*
 
-*Defined in [index.ts:641](https://github.com/101arrowz/fflate/blob/5c43980/src/index.ts#L641)*
-
 The memory level to use, ranging from 0-12. Increasing this increases speed and compression ratio at the cost of memory.
 
 Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
 It is recommended not to lower the value below 4, since that tends to hurt performance.
+In addition, values above 8 tend to help very little on most data and can even hurt performance.
 
 The default value is automatically determined based on the size of the input data.

package.json (+ 4 - 2)

@@ -1,6 +1,6 @@
 {
   "name": "fflate",
-  "version": "0.0.3",
+  "version": "0.0.4",
   "description": "High performance (de)compression in an 8kB package",
   "main": "lib/index.js",
   "module": "esm/index.js",
@@ -23,7 +23,9 @@
     "tiny"
   ],
   "scripts": {
-    "build": "tsc && tsc --project tsconfig.esm.json && typedoc --mode library --plugin typedoc-plugin-markdown --hideProjectName --hideBreadcrumbs --readme none",
+    "build": "yarn build:lib && yarn build:docs",
+    "build:lib": "tsc && tsc --project tsconfig.esm.json",
+    "build:docs": "typedoc --mode library --plugin typedoc-plugin-markdown --hideProjectName --hideBreadcrumbs --readme none --disableSources",
     "prepare": "yarn build"
     "prepare": "yarn build"
   },
   },
   "devDependencies": {
   "devDependencies": {

src/index.ts (+ 8 - 7)

@@ -147,7 +147,7 @@ const inflt = (dat: Uint8Array, buf?: Uint8Array) => {
     if (l > bl) {
       // Double or set to necessary, whichever is greater
       const nbuf = new u8(Math.max(bl << 1, l));
-      nbuf.set(buf);
+      for (let i = 0; i < bl; ++i) nbuf[i] = buf[i];
       buf = nbuf;
     }
   }
@@ -426,7 +426,7 @@ const wfblk = (out: Uint8Array, pos: number, dat: Uint8Array) => {
   out[o + 2] = s >>> 8;
   out[o + 3] = out[o + 1] ^ 255;
   out[o + 4] = out[o + 2] ^ 255;
-  out.set(dat, o + 5);
+  for (let i = 0; i < s; ++i) out[o + i + 5] = dat[i];
   return (o + 4 + s) << 3;
 }
 
@@ -635,6 +635,7 @@ export interface DeflateOptions {
    * 
    * Note that this is exponential: while level 0 uses 4 kB, level 4 uses 64 kB, level 8 uses 1 MB, and level 12 uses 16 MB.
    * It is recommended not to lower the value below 4, since that tends to hurt performance.
+   * In addition, values above 8 tend to help very little on most data and can even hurt performance.
    * 
    * The default value is automatically determined based on the size of the input data.
    */
@@ -663,8 +664,8 @@ export interface GZIPOptions extends DeflateOptions {
 export interface ZlibOptions extends DeflateOptions {}
 
 // deflate with opts
-const dopt = (dat: Uint8Array, opt: DeflateOptions, pre: number, post: number) =>
-  dflt(dat, opt.level || 6, 12 + (opt.mem || 4), pre, post);
+const dopt = (dat: Uint8Array, opt: DeflateOptions = {}, pre: number, post: number) =>
+  dflt(dat, opt.level == null ? 6 : opt.level, opt.mem == null ? Math.ceil(Math.max(8, Math.min(13, Math.log(dat.length))) * 1.5) : (12 + opt.mem), pre, post);
 
 // write bytes
 const wbytes = (d: Uint8Array, b: number, v: number) => {
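To make the new default concrete, the `mem` selection in `dopt` can be evaluated for a few input sizes; a quick sketch that just mirrors the expression above:

```js
// Mirrors: Math.ceil(Math.max(8, Math.min(13, Math.log(dat.length))) * 1.5)
const autoMemParam = len => Math.ceil(Math.max(8, Math.min(13, Math.log(len))) * 1.5);

console.log(autoMemParam(100));      // 12, i.e. the same as an explicit mem of 0
console.log(autoMemParam(100000));   // 18, roughly mem 6
console.log(autoMemParam(10000000)); // 20, the cap, roughly mem 8
```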
@@ -677,7 +678,7 @@ const wbytes = (d: Uint8Array, b: number, v: number) => {
  * @param opts The compression options
  * @returns The deflated version of the data
  */
-export function deflate(data: Uint8Array, opts: DeflateOptions = {}) {
+export function deflate(data: Uint8Array, opts?: DeflateOptions) {
   return dopt(data, opts, 0, 0);
 }
 
@@ -697,7 +698,7 @@ export function inflate(data: Uint8Array, out?: Uint8Array) {
  * @param opts The compression options
  * @returns The gzipped version of the data
  */
-export function gzip(data: Uint8Array, opts: GZIPOptions = {}) {
+export function gzip(data: Uint8Array, opts?: GZIPOptions) {
   const fn = opts.filename;
   const l = data.length, raw = dopt(data, opts, 10 + ((fn && fn.length + 1) || 0), 8), s = raw.length;
   raw[0] = 31, raw[1] = 139, raw[2] = 8, raw[8] = opts.level == 0 ? 4 : opts.level == 9 ? 2 : 3, raw[9] = 255;
@@ -734,7 +735,7 @@ export function gunzip(data: Uint8Array, out?: Uint8Array) {
  * @param opts The compression options
  * @returns The zlib-compressed version of the data
  */
-export function zlib(data: Uint8Array, opts: ZlibOptions) {
+export function zlib(data: Uint8Array, opts?: ZlibOptions) {
   const l = data.length, raw = dopt(data, opts, 2, 4), s = raw.length;
   const lv = opts.level, fl = lv == 0 ? 0 : lv < 6 ? 1 : lv == 9 ? 3 : 2;
   raw[0] = 120, raw[1] = (fl << 6) | (fl ? (32 - 2 * fl) : 1);

tsconfig.esm.json (+ 1 - 1)

@@ -2,7 +2,7 @@
   "extends": "./tsconfig.json",
   "compilerOptions": {
     "declaration": false,
-    "target": "ESNext",
+    "module": "ESNext",
     "outDir": "esm"
   }
 }