Updated the files.

Batuhan Berk Başoğlu 2024-02-08 19:38:41 -05:00
parent 1553e6b971
commit 753967d4f5
23418 changed files with 3784666 additions and 0 deletions

my-app/node_modules/hosted-git-info/LICENSE generated vendored Executable file

@@ -0,0 +1,13 @@
Copyright (c) 2015, Rebecca Turner
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

my-app/node_modules/hosted-git-info/README.md generated vendored Executable file

@@ -0,0 +1,133 @@
# hosted-git-info
This will let you identify and transform various Git host URLs between
protocols. It can also tell you the URL of the raw path for a particular
file, for direct access without git.
## Example
```javascript
const hostedGitInfo = require("hosted-git-info")
const info = hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git", opts)
/* info looks like:
{
type: "github",
domain: "github.com",
user: "npm",
project: "hosted-git-info"
}
*/
```
If the URL can't be matched with a git host, `null` will be returned. We
can match git, ssh, and https URLs. Additionally, we can match ssh connect
strings (`git@github.com:npm/hosted-git-info`) and shortcuts (eg,
`github:npm/hosted-git-info`). GitHub specifically is also detected in a
third, unprefixed form: `npm/hosted-git-info`.
If it does match, the returned object has properties of:
* info.type -- The short name of the service
* info.domain -- The domain for git protocol use
* info.user -- The name of the user/org on the git host
* info.project -- The name of the project on the git host
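For instance, a quick sketch of the different accepted forms (assuming the package is installed; results abbreviated in comments):
```javascript
const hostedGitInfo = require("hosted-git-info")

// ssh connect string, shortcut, and the bare GitHub form all resolve the same way
for (const spec of [
  "git@github.com:npm/hosted-git-info",
  "github:npm/hosted-git-info",
  "npm/hosted-git-info",
]) {
  const info = hostedGitInfo.fromUrl(spec)
  console.log(info.type, info.domain, info.user, info.project)
  // github github.com npm hosted-git-info
}

// anything that can't be matched to a known git host yields no info object
console.log(Boolean(hostedGitInfo.fromUrl("https://example.com/not-a-repo")))
// false
```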
## Version Contract
The major version will be bumped any time…
* The constructor stops accepting URLs that it previously accepted.
* A method is removed.
* A method can no longer accept the number and type of arguments it previously accepted.
* A method can return a different type than it currently returns.
Implications:
* I do not consider the specific format of the urls returned from, say
`.https()` to be a part of the contract. The contract is that it will
return a string that can be used to fetch the repo via HTTPS. But what
that string looks like, specifically, can change.
* Dropping support for a hosted git provider would constitute a breaking
change.
## Usage
### const info = hostedGitInfo.fromUrl(gitSpecifier[, options])
* *gitSpecifier* is a URL of a git repository or an SCP-style specifier of one.
* *options* is an optional object. It can have the following properties:
* *noCommittish* — If true then committishes won't be included in generated URLs.
* *noGitPlus* — If true then `git+` won't be prefixed on URLs.
## Methods
All of the methods take the same options as the `fromUrl` factory. Options
provided to a method override those provided to the constructor.
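For example, a short sketch of how the options interact (assuming the package is installed): options passed to `fromUrl` become defaults, and per-method options win.
```javascript
const hostedGitInfo = require("hosted-git-info")
const info = hostedGitInfo.fromUrl(
  "git+https://github.com/npm/hosted-git-info.git#v1.0.0",
  { noCommittish: true }
)
console.log(info.https())
// git+https://github.com/npm/hosted-git-info.git
console.log(info.https({ noCommittish: false }))
// git+https://github.com/npm/hosted-git-info.git#v1.0.0
console.log(info.https({ noGitPlus: true }))
// https://github.com/npm/hosted-git-info.git
```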
* info.file(path, opts)
Given the path of a file relative to the repository, returns a URL for
directly fetching it from the githost. If no committish was set then
`HEAD` will be used as the default.
For example `hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git#v1.0.0").file("package.json")`
would return `https://raw.githubusercontent.com/npm/hosted-git-info/v1.0.0/package.json`
* info.shortcut(opts)
eg, `github:npm/hosted-git-info`
* info.browse(path, fragment, opts)
eg, `https://github.com/npm/hosted-git-info/tree/v1.2.0`,
`https://github.com/npm/hosted-git-info/tree/v1.2.0/package.json`,
`https://github.com/npm/hosted-git-info/tree/v1.2.0/README.md#supported-hosts`
* info.bugs(opts)
eg, `https://github.com/npm/hosted-git-info/issues`
* info.docs(opts)
eg, `https://github.com/npm/hosted-git-info/tree/v1.2.0#readme`
* info.https(opts)
eg, `git+https://github.com/npm/hosted-git-info.git`
* info.sshurl(opts)
eg, `git+ssh://git@github.com/npm/hosted-git-info.git`
* info.ssh(opts)
eg, `git@github.com:npm/hosted-git-info.git`
* info.path(opts)
eg, `npm/hosted-git-info`
* info.tarball(opts)
eg, `https://github.com/npm/hosted-git-info/archive/v1.2.0.tar.gz`
* info.getDefaultRepresentation()
Returns the default output type. The default output type is based on the
string you passed in to be parsed.
* info.toString(opts)
Uses `getDefaultRepresentation()` to call one of the other methods and get a URL for
this resource. As such `hostedGitInfo.fromUrl(url).toString()` will give
you a normalized version of the URL that still uses the same protocol.
Shortcuts will still be returned as shortcuts, but the special case github
form of `org/project` will be normalized to `github:org/project`.
SSH connect strings will be normalized into `git+ssh` URLs.
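For instance (a small sketch, assuming the package is installed):
```javascript
const hostedGitInfo = require("hosted-git-info")
console.log(hostedGitInfo.fromUrl("npm/hosted-git-info").toString())
// github:npm/hosted-git-info   (bare org/project is normalized to a shortcut)
console.log(hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git").toString())
// git+ssh://git@github.com/npm/hosted-git-info.git   (ssh connect string becomes git+ssh)
```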
## Supported hosts
Currently this supports GitHub (including Gists), Bitbucket, GitLab and Sourcehut.
Pull requests for additional hosts welcome.

my-app/node_modules/hosted-git-info/lib/from-url.js generated vendored Executable file

@@ -0,0 +1,122 @@
'use strict'
const parseUrl = require('./parse-url')
// look for github shorthand inputs, such as npm/cli
const isGitHubShorthand = (arg) => {
// it cannot contain whitespace before the first #
// it cannot start with a / because that's probably an absolute file path
// but it must include a slash since repos are username/repository
// it cannot start with a . because that's probably a relative file path
// it cannot start with an @ because that's a scoped package if it passes the other tests
// it cannot contain a : before a # because that tells us that there's a protocol
// a second / may not exist before a #
const firstHash = arg.indexOf('#')
const firstSlash = arg.indexOf('/')
const secondSlash = arg.indexOf('/', firstSlash + 1)
const firstColon = arg.indexOf(':')
const firstSpace = /\s/.exec(arg)
const firstAt = arg.indexOf('@')
const spaceOnlyAfterHash = !firstSpace || (firstHash > -1 && firstSpace.index > firstHash)
const atOnlyAfterHash = firstAt === -1 || (firstHash > -1 && firstAt > firstHash)
const colonOnlyAfterHash = firstColon === -1 || (firstHash > -1 && firstColon > firstHash)
const secondSlashOnlyAfterHash = secondSlash === -1 || (firstHash > -1 && secondSlash > firstHash)
const hasSlash = firstSlash > 0
// if a # is found, what we really want to know is that the character
// immediately before # is not a /
const doesNotEndWithSlash = firstHash > -1 ? arg[firstHash - 1] !== '/' : !arg.endsWith('/')
const doesNotStartWithDot = !arg.startsWith('.')
return spaceOnlyAfterHash && hasSlash && doesNotEndWithSlash &&
doesNotStartWithDot && atOnlyAfterHash && colonOnlyAfterHash &&
secondSlashOnlyAfterHash
}
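// illustrative examples of the heuristic above (added for clarity, not exhaustive):
//   isGitHubShorthand('npm/cli')                 // true
//   isGitHubShorthand('npm/cli#semver:^1.0.0')   // true  (the : comes after the #)
//   isGitHubShorthand('./npm/cli')               // false (looks like a relative path)
//   isGitHubShorthand('github.com:npm/cli')      // false (a : before any # implies a protocol)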
module.exports = (giturl, opts, { gitHosts, protocols }) => {
if (!giturl) {
return
}
const correctedUrl = isGitHubShorthand(giturl) ? `github:${giturl}` : giturl
const parsed = parseUrl(correctedUrl, protocols)
if (!parsed) {
return
}
const gitHostShortcut = gitHosts.byShortcut[parsed.protocol]
const gitHostDomain = gitHosts.byDomain[parsed.hostname.startsWith('www.')
? parsed.hostname.slice(4)
: parsed.hostname]
const gitHostName = gitHostShortcut || gitHostDomain
if (!gitHostName) {
return
}
const gitHostInfo = gitHosts[gitHostShortcut || gitHostDomain]
let auth = null
if (protocols[parsed.protocol]?.auth && (parsed.username || parsed.password)) {
auth = `${parsed.username}${parsed.password ? ':' + parsed.password : ''}`
}
let committish = null
let user = null
let project = null
let defaultRepresentation = null
try {
if (gitHostShortcut) {
let pathname = parsed.pathname.startsWith('/') ? parsed.pathname.slice(1) : parsed.pathname
const firstAt = pathname.indexOf('@')
// we ignore auth for shortcuts, so just trim it out
if (firstAt > -1) {
pathname = pathname.slice(firstAt + 1)
}
const lastSlash = pathname.lastIndexOf('/')
if (lastSlash > -1) {
user = decodeURIComponent(pathname.slice(0, lastSlash))
// we want nulls only, never empty strings
if (!user) {
user = null
}
project = decodeURIComponent(pathname.slice(lastSlash + 1))
} else {
project = decodeURIComponent(pathname)
}
if (project.endsWith('.git')) {
project = project.slice(0, -4)
}
if (parsed.hash) {
committish = decodeURIComponent(parsed.hash.slice(1))
}
defaultRepresentation = 'shortcut'
} else {
if (!gitHostInfo.protocols.includes(parsed.protocol)) {
return
}
const segments = gitHostInfo.extract(parsed)
if (!segments) {
return
}
user = segments.user && decodeURIComponent(segments.user)
project = decodeURIComponent(segments.project)
committish = decodeURIComponent(segments.committish)
defaultRepresentation = protocols[parsed.protocol]?.name || parsed.protocol.slice(0, -1)
}
} catch (err) {
/* istanbul ignore else */
if (err instanceof URIError) {
return
} else {
throw err
}
}
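// this tuple maps positionally onto the GitHost constructor in lib/index.js:
// (type, user, auth, project, committish, defaultRepresentation, opts)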
return [gitHostName, user, auth, project, committish, defaultRepresentation, opts]
}

my-app/node_modules/hosted-git-info/lib/hosts.js generated vendored Executable file

@@ -0,0 +1,227 @@
/* eslint-disable max-len */
'use strict'
const maybeJoin = (...args) => args.every(arg => arg) ? args.join('') : ''
const maybeEncode = (arg) => arg ? encodeURIComponent(arg) : ''
const formatHashFragment = (f) => f.toLowerCase().replace(/^\W+|\/|\W+$/g, '').replace(/\W+/g, '-')
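// illustrative results of the helpers above (added for clarity):
//   maybeJoin('#', 'v1.0.0')  -> '#v1.0.0'    maybeJoin('#', null) -> ''
//   maybeEncode(undefined)    -> ''
//   formatHashFragment('Supported hosts') -> 'supported-hosts'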
const defaults = {
sshtemplate: ({ domain, user, project, committish }) =>
`git@${domain}:${user}/${project}.git${maybeJoin('#', committish)}`,
sshurltemplate: ({ domain, user, project, committish }) =>
`git+ssh://git@${domain}/${user}/${project}.git${maybeJoin('#', committish)}`,
edittemplate: ({ domain, user, project, committish, editpath, path }) =>
`https://${domain}/${user}/${project}${maybeJoin('/', editpath, '/', maybeEncode(committish || 'HEAD'), '/', path)}`,
browsetemplate: ({ domain, user, project, committish, treepath }) =>
`https://${domain}/${user}/${project}${maybeJoin('/', treepath, '/', maybeEncode(committish))}`,
browsetreetemplate: ({ domain, user, project, committish, treepath, path, fragment, hashformat }) =>
`https://${domain}/${user}/${project}/${treepath}/${maybeEncode(committish || 'HEAD')}/${path}${maybeJoin('#', hashformat(fragment || ''))}`,
browseblobtemplate: ({ domain, user, project, committish, blobpath, path, fragment, hashformat }) =>
`https://${domain}/${user}/${project}/${blobpath}/${maybeEncode(committish || 'HEAD')}/${path}${maybeJoin('#', hashformat(fragment || ''))}`,
docstemplate: ({ domain, user, project, treepath, committish }) =>
`https://${domain}/${user}/${project}${maybeJoin('/', treepath, '/', maybeEncode(committish))}#readme`,
httpstemplate: ({ auth, domain, user, project, committish }) =>
`git+https://${maybeJoin(auth, '@')}${domain}/${user}/${project}.git${maybeJoin('#', committish)}`,
filetemplate: ({ domain, user, project, committish, path }) =>
`https://${domain}/${user}/${project}/raw/${maybeEncode(committish || 'HEAD')}/${path}`,
shortcuttemplate: ({ type, user, project, committish }) =>
`${type}:${user}/${project}${maybeJoin('#', committish)}`,
pathtemplate: ({ user, project, committish }) =>
`${user}/${project}${maybeJoin('#', committish)}`,
bugstemplate: ({ domain, user, project }) =>
`https://${domain}/${user}/${project}/issues`,
hashformat: formatHashFragment,
}
const hosts = {}
hosts.github = {
// First two are insecure and generally shouldn't be used any more, but
// they are still supported.
protocols: ['git:', 'http:', 'git+ssh:', 'git+https:', 'ssh:', 'https:'],
domain: 'github.com',
treepath: 'tree',
blobpath: 'blob',
editpath: 'edit',
filetemplate: ({ auth, user, project, committish, path }) =>
`https://${maybeJoin(auth, '@')}raw.githubusercontent.com/${user}/${project}/${maybeEncode(committish || 'HEAD')}/${path}`,
gittemplate: ({ auth, domain, user, project, committish }) =>
`git://${maybeJoin(auth, '@')}${domain}/${user}/${project}.git${maybeJoin('#', committish)}`,
tarballtemplate: ({ domain, user, project, committish }) =>
`https://codeload.${domain}/${user}/${project}/tar.gz/${maybeEncode(committish || 'HEAD')}`,
extract: (url) => {
let [, user, project, type, committish] = url.pathname.split('/', 5)
if (type && type !== 'tree') {
return
}
if (!type) {
committish = url.hash.slice(1)
}
if (project && project.endsWith('.git')) {
project = project.slice(0, -4)
}
if (!user || !project) {
return
}
return { user, project, committish }
},
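// e.g. extract(new URL('https://github.com/npm/cli/tree/v9.0.0'))
//        -> { user: 'npm', project: 'cli', committish: 'v9.0.0' }   (illustrative)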
}
hosts.bitbucket = {
protocols: ['git+ssh:', 'git+https:', 'ssh:', 'https:'],
domain: 'bitbucket.org',
treepath: 'src',
blobpath: 'src',
editpath: '?mode=edit',
edittemplate: ({ domain, user, project, committish, treepath, path, editpath }) =>
`https://${domain}/${user}/${project}${maybeJoin('/', treepath, '/', maybeEncode(committish || 'HEAD'), '/', path, editpath)}`,
tarballtemplate: ({ domain, user, project, committish }) =>
`https://${domain}/${user}/${project}/get/${maybeEncode(committish || 'HEAD')}.tar.gz`,
extract: (url) => {
let [, user, project, aux] = url.pathname.split('/', 4)
if (['get'].includes(aux)) {
return
}
if (project && project.endsWith('.git')) {
project = project.slice(0, -4)
}
if (!user || !project) {
return
}
return { user, project, committish: url.hash.slice(1) }
},
}
hosts.gitlab = {
protocols: ['git+ssh:', 'git+https:', 'ssh:', 'https:'],
domain: 'gitlab.com',
treepath: 'tree',
blobpath: 'tree',
editpath: '-/edit',
httpstemplate: ({ auth, domain, user, project, committish }) =>
`git+https://${maybeJoin(auth, '@')}${domain}/${user}/${project}.git${maybeJoin('#', committish)}`,
tarballtemplate: ({ domain, user, project, committish }) =>
`https://${domain}/${user}/${project}/repository/archive.tar.gz?ref=${maybeEncode(committish || 'HEAD')}`,
extract: (url) => {
const path = url.pathname.slice(1)
if (path.includes('/-/') || path.includes('/archive.tar.gz')) {
return
}
const segments = path.split('/')
let project = segments.pop()
if (project.endsWith('.git')) {
project = project.slice(0, -4)
}
const user = segments.join('/')
if (!user || !project) {
return
}
return { user, project, committish: url.hash.slice(1) }
},
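// e.g. extract(new URL('https://gitlab.com/group/subgroup/project'))
//        -> { user: 'group/subgroup', project: 'project', committish: '' }
//      (illustrative; nested groups keep their slashed path as the user)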
}
hosts.gist = {
protocols: ['git:', 'git+ssh:', 'git+https:', 'ssh:', 'https:'],
domain: 'gist.github.com',
editpath: 'edit',
sshtemplate: ({ domain, project, committish }) =>
`git@${domain}:${project}.git${maybeJoin('#', committish)}`,
sshurltemplate: ({ domain, project, committish }) =>
`git+ssh://git@${domain}/${project}.git${maybeJoin('#', committish)}`,
edittemplate: ({ domain, user, project, committish, editpath }) =>
`https://${domain}/${user}/${project}${maybeJoin('/', maybeEncode(committish))}/${editpath}`,
browsetemplate: ({ domain, project, committish }) =>
`https://${domain}/${project}${maybeJoin('/', maybeEncode(committish))}`,
browsetreetemplate: ({ domain, project, committish, path, hashformat }) =>
`https://${domain}/${project}${maybeJoin('/', maybeEncode(committish))}${maybeJoin('#', hashformat(path))}`,
browseblobtemplate: ({ domain, project, committish, path, hashformat }) =>
`https://${domain}/${project}${maybeJoin('/', maybeEncode(committish))}${maybeJoin('#', hashformat(path))}`,
docstemplate: ({ domain, project, committish }) =>
`https://${domain}/${project}${maybeJoin('/', maybeEncode(committish))}`,
httpstemplate: ({ domain, project, committish }) =>
`git+https://${domain}/${project}.git${maybeJoin('#', committish)}`,
filetemplate: ({ user, project, committish, path }) =>
`https://gist.githubusercontent.com/${user}/${project}/raw${maybeJoin('/', maybeEncode(committish))}/${path}`,
shortcuttemplate: ({ type, project, committish }) =>
`${type}:${project}${maybeJoin('#', committish)}`,
pathtemplate: ({ project, committish }) =>
`${project}${maybeJoin('#', committish)}`,
bugstemplate: ({ domain, project }) =>
`https://${domain}/${project}`,
gittemplate: ({ domain, project, committish }) =>
`git://${domain}/${project}.git${maybeJoin('#', committish)}`,
tarballtemplate: ({ project, committish }) =>
`https://codeload.github.com/gist/${project}/tar.gz/${maybeEncode(committish || 'HEAD')}`,
extract: (url) => {
let [, user, project, aux] = url.pathname.split('/', 4)
if (aux === 'raw') {
return
}
if (!project) {
if (!user) {
return
}
project = user
user = null
}
if (project.endsWith('.git')) {
project = project.slice(0, -4)
}
return { user, project, committish: url.hash.slice(1) }
},
hashformat: function (fragment) {
return fragment && 'file-' + formatHashFragment(fragment)
},
}
hosts.sourcehut = {
protocols: ['git+ssh:', 'https:'],
domain: 'git.sr.ht',
treepath: 'tree',
blobpath: 'tree',
filetemplate: ({ domain, user, project, committish, path }) =>
`https://${domain}/${user}/${project}/blob/${maybeEncode(committish) || 'HEAD'}/${path}`,
httpstemplate: ({ domain, user, project, committish }) =>
`https://${domain}/${user}/${project}.git${maybeJoin('#', committish)}`,
tarballtemplate: ({ domain, user, project, committish }) =>
`https://${domain}/${user}/${project}/archive/${maybeEncode(committish) || 'HEAD'}.tar.gz`,
bugstemplate: ({ user, project }) => null,
extract: (url) => {
let [, user, project, aux] = url.pathname.split('/', 4)
// tarball url
if (['archive'].includes(aux)) {
return
}
if (project && project.endsWith('.git')) {
project = project.slice(0, -4)
}
if (!user || !project) {
return
}
return { user, project, committish: url.hash.slice(1) }
},
}
for (const [name, host] of Object.entries(hosts)) {
hosts[name] = Object.assign({}, defaults, host)
}
module.exports = hosts

my-app/node_modules/hosted-git-info/lib/index.js generated vendored Executable file

@@ -0,0 +1,179 @@
'use strict'
const { LRUCache } = require('lru-cache')
const hosts = require('./hosts.js')
const fromUrl = require('./from-url.js')
const parseUrl = require('./parse-url.js')
const cache = new LRUCache({ max: 1000 })
class GitHost {
constructor (type, user, auth, project, committish, defaultRepresentation, opts = {}) {
Object.assign(this, GitHost.#gitHosts[type], {
type,
user,
auth,
project,
committish,
default: defaultRepresentation,
opts,
})
}
static #gitHosts = { byShortcut: {}, byDomain: {} }
static #protocols = {
'git+ssh:': { name: 'sshurl' },
'ssh:': { name: 'sshurl' },
'git+https:': { name: 'https', auth: true },
'git:': { auth: true },
'http:': { auth: true },
'https:': { auth: true },
'git+http:': { auth: true },
}
static addHost (name, host) {
GitHost.#gitHosts[name] = host
GitHost.#gitHosts.byDomain[host.domain] = name
GitHost.#gitHosts.byShortcut[`${name}:`] = name
GitHost.#protocols[`${name}:`] = { name }
}
static fromUrl (giturl, opts) {
if (typeof giturl !== 'string') {
return
}
const key = giturl + JSON.stringify(opts || {})
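// both hits and misses are cached: a non-matching input stores `undefined`,
// so repeated lookups of the same string stay cheap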
if (!cache.has(key)) {
const hostArgs = fromUrl(giturl, opts, {
gitHosts: GitHost.#gitHosts,
protocols: GitHost.#protocols,
})
cache.set(key, hostArgs ? new GitHost(...hostArgs) : undefined)
}
return cache.get(key)
}
static parseUrl (url) {
return parseUrl(url)
}
#fill (template, opts) {
if (typeof template !== 'function') {
return null
}
const options = { ...this, ...this.opts, ...opts }
// the path should always be set so we don't end up with 'undefined' in urls
if (!options.path) {
options.path = ''
}
// template functions will insert the leading slash themselves
if (options.path.startsWith('/')) {
options.path = options.path.slice(1)
}
if (options.noCommittish) {
options.committish = null
}
const result = template(options)
return options.noGitPlus && result.startsWith('git+') ? result.slice(4) : result
}
hash () {
return this.committish ? `#${this.committish}` : ''
}
ssh (opts) {
return this.#fill(this.sshtemplate, opts)
}
sshurl (opts) {
return this.#fill(this.sshurltemplate, opts)
}
browse (path, ...args) {
// not a string, treat path as opts
if (typeof path !== 'string') {
return this.#fill(this.browsetemplate, path)
}
if (typeof args[0] !== 'string') {
return this.#fill(this.browsetreetemplate, { ...args[0], path })
}
return this.#fill(this.browsetreetemplate, { ...args[1], fragment: args[0], path })
}
// If the path is known to be a file, then browseFile should be used. For some hosts
// the url is the same as browse, but for others like GitHub a file can use both `/tree/`
// and `/blob/` in the path. When using a default committish of `HEAD` then the `/tree/`
// path will redirect to a specific commit. Using the `/blob/` path avoids this and
// does not redirect to a different commit.
browseFile (path, ...args) {
if (typeof args[0] !== 'string') {
return this.#fill(this.browseblobtemplate, { ...args[0], path })
}
return this.#fill(this.browseblobtemplate, { ...args[1], fragment: args[0], path })
}
docs (opts) {
return this.#fill(this.docstemplate, opts)
}
bugs (opts) {
return this.#fill(this.bugstemplate, opts)
}
https (opts) {
return this.#fill(this.httpstemplate, opts)
}
git (opts) {
return this.#fill(this.gittemplate, opts)
}
shortcut (opts) {
return this.#fill(this.shortcuttemplate, opts)
}
path (opts) {
return this.#fill(this.pathtemplate, opts)
}
tarball (opts) {
return this.#fill(this.tarballtemplate, { ...opts, noCommittish: false })
}
file (path, opts) {
return this.#fill(this.filetemplate, { ...opts, path })
}
edit (path, opts) {
return this.#fill(this.edittemplate, { ...opts, path })
}
getDefaultRepresentation () {
return this.default
}
toString (opts) {
if (this.default && typeof this[this.default] === 'function') {
return this[this.default](opts)
}
return this.sshurl(opts)
}
}
for (const [name, host] of Object.entries(hosts)) {
GitHost.addHost(name, host)
}
module.exports = GitHost

my-app/node_modules/hosted-git-info/lib/parse-url.js generated vendored Executable file

@@ -0,0 +1,78 @@
const url = require('url')
const lastIndexOfBefore = (str, char, beforeChar) => {
const startPosition = str.indexOf(beforeChar)
return str.lastIndexOf(char, startPosition > -1 ? startPosition : Infinity)
}
const safeUrl = (u) => {
try {
return new url.URL(u)
} catch {
// this fn should never throw
}
}
// accepts input like git:github.com:user/repo and inserts the // after the first :
const correctProtocol = (arg, protocols) => {
const firstColon = arg.indexOf(':')
const proto = arg.slice(0, firstColon + 1)
if (Object.prototype.hasOwnProperty.call(protocols, proto)) {
return arg
}
const firstAt = arg.indexOf('@')
if (firstAt > -1) {
if (firstAt > firstColon) {
return `git+ssh://${arg}`
} else {
return arg
}
}
const doubleSlash = arg.indexOf('//')
if (doubleSlash === firstColon + 1) {
return arg
}
return `${arg.slice(0, firstColon + 1)}//${arg.slice(firstColon + 1)}`
}
// attempt to correct an SCP-style URL so that it will parse with `new URL()`
const correctUrl = (giturl) => {
// ignore any @ that comes after the first hash, since that denotes the start
// of a committish, which can contain @ characters
const firstAt = lastIndexOfBefore(giturl, '@', '#')
// ignore colons that come after the hash since that could include colons such as:
// git@github.com:user/package-2#semver:^1.0.0
const lastColonBeforeHash = lastIndexOfBefore(giturl, ':', '#')
if (lastColonBeforeHash > firstAt) {
// the last : comes after the first @ (or there is no @)
// like it would in:
// proto://hostname.com:user/repo
// username@hostname.com:user/repo
// :password@hostname.com:user/repo
// username:password@hostname.com:user/repo
// proto://username@hostname.com:user/repo
// proto://:password@hostname.com:user/repo
// proto://username:password@hostname.com:user/repo
// then we replace the last : with a / to create a valid path
giturl = giturl.slice(0, lastColonBeforeHash) + '/' + giturl.slice(lastColonBeforeHash + 1)
}
if (lastIndexOfBefore(giturl, ':', '#') === -1 && giturl.indexOf('//') === -1) {
// we have no : at all
// as it would be in:
// username@hostname.com/user/repo
// then we prepend a protocol
giturl = `git+ssh://${giturl}`
}
return giturl
}
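// e.g. correctUrl('git@github.com:user/repo') -> 'git+ssh://git@github.com/user/repo'   (illustrative)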
module.exports = (giturl, protocols) => {
const withProtocol = protocols ? correctProtocol(giturl, protocols) : giturl
return safeUrl(withProtocol) || safeUrl(correctUrl(withProtocol))
}


@@ -0,0 +1,15 @@
The ISC License
Copyright (c) 2010-2023 Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

File diff suppressed because it is too large


@@ -0,0 +1,856 @@
/**
* @module LRUCache
*/
declare const TYPE: unique symbol;
export type PosInt = number & {
[TYPE]: 'Positive Integer';
};
export type Index = number & {
[TYPE]: 'LRUCache Index';
};
export type UintArray = Uint8Array | Uint16Array | Uint32Array;
export type NumberArray = UintArray | number[];
declare class ZeroArray extends Array<number> {
constructor(size: number);
}
export type { ZeroArray };
export type { Stack };
export type StackLike = Stack | Index[];
declare class Stack {
#private;
heap: NumberArray;
length: number;
static create(max: number): StackLike;
constructor(max: number, HeapCls: {
new (n: number): NumberArray;
});
push(n: Index): void;
pop(): Index;
}
/**
* Promise representing an in-progress {@link LRUCache#fetch} call
*/
export type BackgroundFetch<V> = Promise<V | undefined> & {
__returned: BackgroundFetch<V> | undefined;
__abortController: AbortController;
__staleWhileFetching: V | undefined;
};
export type DisposeTask<K, V> = [
value: V,
key: K,
reason: LRUCache.DisposeReason
];
export declare namespace LRUCache {
/**
* An integer greater than 0, reflecting the calculated size of items
*/
type Size = number;
/**
* Integer greater than 0, representing some number of milliseconds, or the
* time at which a TTL started counting from.
*/
type Milliseconds = number;
/**
* An integer greater than 0, reflecting a number of items
*/
type Count = number;
/**
* The reason why an item was removed from the cache, passed
* to the {@link Disposer} methods.
*/
type DisposeReason = 'evict' | 'set' | 'delete';
/**
* A method called upon item removal, passed as the
* {@link OptionsBase.dispose} and/or
* {@link OptionsBase.disposeAfter} options.
*/
type Disposer<K, V> = (value: V, key: K, reason: DisposeReason) => void;
/**
* A function that returns the effective calculated size
* of an entry in the cache.
*/
type SizeCalculator<K, V> = (value: V, key: K) => Size;
/**
* Options provided to the
* {@link OptionsBase.fetchMethod} function.
*/
interface FetcherOptions<K, V, FC = unknown> {
signal: AbortSignal;
options: FetcherFetchOptions<K, V, FC>;
/**
* Object provided in the {@link FetchOptions.context} option to
* {@link LRUCache#fetch}
*/
context: FC;
}
/**
* Status object that may be passed to {@link LRUCache#fetch},
* {@link LRUCache#get}, {@link LRUCache#set}, and {@link LRUCache#has}.
*/
interface Status<V> {
/**
* The status of a set() operation.
*
* - add: the item was not found in the cache, and was added
* - update: the item was in the cache, with the same value provided
* - replace: the item was in the cache, and replaced
* - miss: the item was not added to the cache for some reason
*/
set?: 'add' | 'update' | 'replace' | 'miss';
/**
* the ttl stored for the item, or undefined if ttls are not used.
*/
ttl?: Milliseconds;
/**
* the start time for the item, or undefined if ttls are not used.
*/
start?: Milliseconds;
/**
* The timestamp used for TTL calculation
*/
now?: Milliseconds;
/**
* the remaining ttl for the item, or undefined if ttls are not used.
*/
remainingTTL?: Milliseconds;
/**
* The calculated size for the item, if sizes are used.
*/
entrySize?: Size;
/**
* The total calculated size of the cache, if sizes are used.
*/
totalCalculatedSize?: Size;
/**
* A flag indicating that the item was not stored, due to exceeding the
* {@link OptionsBase.maxEntrySize}
*/
maxEntrySizeExceeded?: true;
/**
* The old value, specified in the case of `set:'update'` or
* `set:'replace'`
*/
oldValue?: V;
/**
* The results of a {@link LRUCache#has} operation
*
* - hit: the item was found in the cache
* - stale: the item was found in the cache, but is stale
* - miss: the item was not found in the cache
*/
has?: 'hit' | 'stale' | 'miss';
/**
* The status of a {@link LRUCache#fetch} operation.
* Note that this can change as the underlying fetch() moves through
* various states.
*
* - inflight: there is another fetch() for this key which is in process
* - get: there is no fetchMethod, so {@link LRUCache#get} was called.
* - miss: the item is not in cache, and will be fetched.
* - hit: the item is in the cache, and was resolved immediately.
* - stale: the item is in the cache, but stale.
* - refresh: the item is in the cache, and not stale, but
* {@link FetchOptions.forceRefresh} was specified.
*/
fetch?: 'get' | 'inflight' | 'miss' | 'hit' | 'stale' | 'refresh';
/**
* The {@link OptionsBase.fetchMethod} was called
*/
fetchDispatched?: true;
/**
* The cached value was updated after a successful call to
* {@link OptionsBase.fetchMethod}
*/
fetchUpdated?: true;
/**
* The reason for a fetch() rejection. Either the error raised by the
* {@link OptionsBase.fetchMethod}, or the reason for an
* AbortSignal.
*/
fetchError?: Error;
/**
* The fetch received an abort signal
*/
fetchAborted?: true;
/**
* The abort signal received was ignored, and the fetch was allowed to
* continue.
*/
fetchAbortIgnored?: true;
/**
* The fetchMethod promise resolved successfully
*/
fetchResolved?: true;
/**
* The fetchMethod promise was rejected
*/
fetchRejected?: true;
/**
* The status of a {@link LRUCache#get} operation.
*
* - fetching: The item is currently being fetched. If a previous value
* is present and allowed, that will be returned.
* - stale: The item is in the cache, and is stale.
* - hit: the item is in the cache
* - miss: the item is not in the cache
*/
get?: 'stale' | 'hit' | 'miss';
/**
* A fetch or get operation returned a stale value.
*/
returnedStale?: true;
}
/**
* options which override the options set in the LRUCache constructor
* when calling {@link LRUCache#fetch}.
*
* This is the union of {@link GetOptions} and {@link SetOptions}, plus
* {@link OptionsBase.noDeleteOnFetchRejection},
* {@link OptionsBase.allowStaleOnFetchRejection},
* {@link FetchOptions.forceRefresh}, and
* {@link FetcherOptions.context}
*
* Any of these may be modified in the {@link OptionsBase.fetchMethod}
* function, but the {@link GetOptions} fields will of course have no
* effect, as the {@link LRUCache#get} call already happened by the time
* the fetchMethod is called.
*/
interface FetcherFetchOptions<K, V, FC = unknown> extends Pick<OptionsBase<K, V, FC>, 'allowStale' | 'updateAgeOnGet' | 'noDeleteOnStaleGet' | 'sizeCalculation' | 'ttl' | 'noDisposeOnSet' | 'noUpdateTTL' | 'noDeleteOnFetchRejection' | 'allowStaleOnFetchRejection' | 'ignoreFetchAbort' | 'allowStaleOnFetchAbort'> {
status?: Status<V>;
size?: Size;
}
/**
* Options that may be passed to the {@link LRUCache#fetch} method.
*/
interface FetchOptions<K, V, FC> extends FetcherFetchOptions<K, V, FC> {
/**
* Set to true to force a re-load of the existing data, even if it
* is not yet stale.
*/
forceRefresh?: boolean;
/**
* Context provided to the {@link OptionsBase.fetchMethod} as
* the {@link FetcherOptions.context} param.
*
* If the FC type is specified as unknown (the default),
* undefined or void, then this is optional. Otherwise, it will
* be required.
*/
context?: FC;
signal?: AbortSignal;
status?: Status<V>;
}
/**
* Options provided to {@link LRUCache#fetch} when the FC type is something
* other than `unknown`, `undefined`, or `void`
*/
interface FetchOptionsWithContext<K, V, FC> extends FetchOptions<K, V, FC> {
context: FC;
}
/**
* Options provided to {@link LRUCache#fetch} when the FC type is
* `undefined` or `void`
*/
interface FetchOptionsNoContext<K, V> extends FetchOptions<K, V, undefined> {
context?: undefined;
}
/**
* Options that may be passed to the {@link LRUCache#has} method.
*/
interface HasOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'updateAgeOnHas'> {
status?: Status<V>;
}
/**
* Options that may be passed to the {@link LRUCache#get} method.
*/
interface GetOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'allowStale' | 'updateAgeOnGet' | 'noDeleteOnStaleGet'> {
status?: Status<V>;
}
/**
* Options that may be passed to the {@link LRUCache#peek} method.
*/
interface PeekOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'allowStale'> {
}
/**
* Options that may be passed to the {@link LRUCache#set} method.
*/
interface SetOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'sizeCalculation' | 'ttl' | 'noDisposeOnSet' | 'noUpdateTTL'> {
/**
* If size tracking is enabled, then setting an explicit size
* in the {@link LRUCache#set} call will prevent calling the
* {@link OptionsBase.sizeCalculation} function.
*/
size?: Size;
/**
* If TTL tracking is enabled, then setting an explicit start
* time in the {@link LRUCache#set} call will override the
* default time from `performance.now()` or `Date.now()`.
*
* Note that it must be a valid value for whichever time-tracking
* method is in use.
*/
start?: Milliseconds;
status?: Status<V>;
}
/**
* The type signature for the {@link OptionsBase.fetchMethod} option.
*/
type Fetcher<K, V, FC = unknown> = (key: K, staleValue: V | undefined, options: FetcherOptions<K, V, FC>) => Promise<V | undefined | void> | V | undefined | void;
/**
* Options which may be passed to the {@link LRUCache} constructor.
*
* Most of these may be overridden in the various options that use
* them.
*
* Despite all being technically optional, the constructor requires that
* a cache is at minimum limited by one or more of {@link OptionsBase.max},
* {@link OptionsBase.ttl}, or {@link OptionsBase.maxSize}.
*
* If {@link OptionsBase.ttl} is used alone, then it is strongly advised
* (and in fact required by the type definitions here) that the cache
* also set {@link OptionsBase.ttlAutopurge}, to prevent potentially
* unbounded storage.
*/
interface OptionsBase<K, V, FC> {
/**
* The maximum number of items to store in the cache before evicting
* old entries. This is read-only on the {@link LRUCache} instance,
* and may not be overridden.
*
* If set, then storage space will be pre-allocated at construction
* time, and the cache will perform significantly faster.
*
* Note that significantly fewer items may be stored, if
* {@link OptionsBase.maxSize} and/or {@link OptionsBase.ttl} are also
* set.
*/
max?: Count;
/**
* Max time in milliseconds for items to live in cache before they are
* considered stale. Note that stale items are NOT preemptively removed
* by default, and MAY live in the cache long after they have expired.
*
* Also, as this cache is optimized for LRU/MRU operations, some of
* the staleness/TTL checks will reduce performance, as they will incur
* overhead by deleting items.
*
* Must be an integer number of ms. If set to 0, this indicates "no TTL"
*
* @default 0
*/
ttl?: Milliseconds;
/**
* Minimum amount of time in ms in which to check for staleness.
* Defaults to 1, which means that the current time is checked
* at most once per millisecond.
*
* Set to 0 to check the current time every time staleness is tested.
* (This reduces performance, and is theoretically unnecessary.)
*
* Setting this to a higher value will improve performance somewhat
* while using ttl tracking, albeit at the expense of keeping stale
* items around a bit longer than their TTLs would indicate.
*
* @default 1
*/
ttlResolution?: Milliseconds;
/**
* Preemptively remove stale items from the cache.
* Note that this may significantly degrade performance,
* especially if the cache is storing a large number of items.
* It is almost always best to just leave the stale items in
* the cache, and let them fall out as new items are added.
*
* Note that this means that {@link OptionsBase.allowStale} is a bit
* pointless, as stale items will be deleted almost as soon as they
* expire.
*
* @default false
*/
ttlAutopurge?: boolean;
/**
* Update the age of items on {@link LRUCache#get}, renewing their TTL
*
* Has no effect if {@link OptionsBase.ttl} is not set.
*
* @default false
*/
updateAgeOnGet?: boolean;
/**
* Update the age of items on {@link LRUCache#has}, renewing their TTL
*
* Has no effect if {@link OptionsBase.ttl} is not set.
*
* @default false
*/
updateAgeOnHas?: boolean;
/**
* Allow {@link LRUCache#get} and {@link LRUCache#fetch} calls to return
* stale data, if available.
*/
allowStale?: boolean;
/**
* Function that is called on items when they are dropped from the cache.
* This can be handy if you want to close file descriptors or do other
* cleanup tasks when items are no longer accessible. Called with `value,
* key, reason`. It's called before actually removing the item from the
* internal cache, so it is *NOT* safe to re-add them.
*
* Use {@link OptionsBase.disposeAfter} if you wish to dispose items after
* they have been fully removed, when it is safe to add them back to the
* cache.
*/
dispose?: Disposer<K, V>;
/**
* The same as {@link OptionsBase.dispose}, but called *after* the entry
* is completely removed and the cache is once again in a clean state.
* It is safe to add an item right back into the cache at this point.
* However, note that it is *very* easy to inadvertently create infinite
* recursion this way.
*/
disposeAfter?: Disposer<K, V>;
/**
* Set to true to suppress calling the
* {@link OptionsBase.dispose} function if the entry key is
* still accessible within the cache.
* This may be overridden by passing an options object to
* {@link LRUCache#set}.
*/
noDisposeOnSet?: boolean;
/**
* Boolean flag to tell the cache to not update the TTL when
* setting a new value for an existing key (ie, when updating a value
* rather than inserting a new value). Note that the TTL value is
* _always_ set (if provided) when adding a new entry into the cache.
*
* Has no effect if a {@link OptionsBase.ttl} is not set.
*/
noUpdateTTL?: boolean;
/**
* If you wish to track item size, you must provide a maxSize
* note that we still will only keep up to max *actual items*,
* if max is set, so size tracking may cause fewer than max items
* to be stored. At the extreme, a single item of maxSize size
* will cause everything else in the cache to be dropped when it
* is added. Use with caution!
*
* Note also that size tracking can negatively impact performance,
* though for most cases, only minimally.
*/
maxSize?: Size;
/**
* The maximum allowed size for any single item in the cache.
*
* If a larger item is passed to {@link LRUCache#set} or returned by a
* {@link OptionsBase.fetchMethod}, then it will not be stored in the
* cache.
*/
maxEntrySize?: Size;
/**
* A function that returns a number indicating the item's size.
*
* If not provided, and {@link OptionsBase.maxSize} or
* {@link OptionsBase.maxEntrySize} are set, then all
* {@link LRUCache#set} calls **must** provide an explicit
* {@link SetOptions.size} or sizeCalculation param.
*/
sizeCalculation?: SizeCalculator<K, V>;
/**
* Method that provides the implementation for {@link LRUCache#fetch}
*/
fetchMethod?: Fetcher<K, V, FC>;
/**
* Set to true to suppress the deletion of stale data when a
* {@link OptionsBase.fetchMethod} returns a rejected promise.
*/
noDeleteOnFetchRejection?: boolean;
/**
* Do not delete stale items when they are retrieved with
* {@link LRUCache#get}.
*
* Note that the `get` return value will still be `undefined`
* unless {@link OptionsBase.allowStale} is true.
*/
noDeleteOnStaleGet?: boolean;
/**
* Set to true to allow returning stale data when a
* {@link OptionsBase.fetchMethod} throws an error or returns a rejected
* promise.
*
* This differs from using {@link OptionsBase.allowStale} in that stale
* data will ONLY be returned in the case that the
* {@link LRUCache#fetch} fails, not any other times.
*/
allowStaleOnFetchRejection?: boolean;
/**
* Set to true to return a stale value from the cache when the
* `AbortSignal` passed to the {@link OptionsBase.fetchMethod} dispatches an `'abort'`
* event, whether user-triggered, or due to internal cache behavior.
*
* Unless {@link OptionsBase.ignoreFetchAbort} is also set, the underlying
* {@link OptionsBase.fetchMethod} will still be considered canceled, and
* any value it returns will be ignored and not cached.
*
* Caveat: since fetches are aborted when a new value is explicitly
* set in the cache, this can lead to fetch returning a stale value,
* since that was the fallback value _at the moment the `fetch()` was
* initiated_, even though the new updated value is now present in
* the cache.
*
* For example:
*
* ```ts
* const cache = new LRUCache<string, any>({
* ttl: 100,
* fetchMethod: async (url, oldValue, { signal }) => {
* const res = await fetch(url, { signal })
* return await res.json()
* }
* })
* cache.set('https://example.com/', { some: 'data' })
* // 100ms go by...
* const result = cache.fetch('https://example.com/')
* cache.set('https://example.com/', { other: 'thing' })
* console.log(await result) // { some: 'data' }
* console.log(cache.get('https://example.com/')) // { other: 'thing' }
* ```
*/
allowStaleOnFetchAbort?: boolean;
/**
* Set to true to ignore the `abort` event emitted by the `AbortSignal`
* object passed to {@link OptionsBase.fetchMethod}, and still cache the
* resulting resolution value, as long as it is not `undefined`.
*
* When used on its own, this means aborted {@link LRUCache#fetch} calls are not
* immediately resolved or rejected when they are aborted, and instead
* take the full time to await.
*
* When used with {@link OptionsBase.allowStaleOnFetchAbort}, aborted
* {@link LRUCache#fetch} calls will resolve immediately to their stale
* cached value or `undefined`, and will continue to process and eventually
* update the cache when they resolve, as long as the resulting value is
* not `undefined`, thus supporting a "return stale on timeout while
* refreshing" mechanism by passing `AbortSignal.timeout(n)` as the signal.
*
* **Note**: regardless of this setting, an `abort` event _is still
* emitted on the `AbortSignal` object_, so may result in invalid results
* when passed to other underlying APIs that use AbortSignals.
*
* This may be overridden in the {@link OptionsBase.fetchMethod} or the
* call to {@link LRUCache#fetch}.
*/
ignoreFetchAbort?: boolean;
}
interface OptionsMaxLimit<K, V, FC> extends OptionsBase<K, V, FC> {
max: Count;
}
interface OptionsTTLLimit<K, V, FC> extends OptionsBase<K, V, FC> {
ttl: Milliseconds;
ttlAutopurge: boolean;
}
interface OptionsSizeLimit<K, V, FC> extends OptionsBase<K, V, FC> {
maxSize: Size;
}
/**
* The valid safe options for the {@link LRUCache} constructor
*/
type Options<K, V, FC> = OptionsMaxLimit<K, V, FC> | OptionsSizeLimit<K, V, FC> | OptionsTTLLimit<K, V, FC>;
/**
* Entry objects used by {@link LRUCache#load} and {@link LRUCache#dump},
* and returned by {@link LRUCache#info}.
*/
interface Entry<V> {
value: V;
ttl?: Milliseconds;
size?: Size;
start?: Milliseconds;
}
}
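/*
 * A minimal usage sketch (JavaScript, CommonJS; added for illustration and
 * assuming `lru-cache` is installed). The constructor must be bounded by
 * `max`, `maxSize`, or `ttl` together with `ttlAutopurge`:
 *
 *   const { LRUCache } = require('lru-cache')
 *   const cache = new LRUCache({ max: 500, ttl: 60_000 })
 *   cache.set('key', { some: 'value' })
 *   cache.get('key')    // { some: 'value' }  (updates recency)
 *   cache.peek('key')   // same value, without updating recency
 *   cache.delete('key') // true
 */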
/**
* Default export, the thing you're using this module to get.
*
* All properties from the options object (with the exception of
* {@link OptionsBase.max} and {@link OptionsBase.maxSize}) are added as
* normal public members. (`max` and `maxSize` are read-only getters.)
* Changing any of these will alter the defaults for subsequent method calls,
* but is otherwise safe.
*/
export declare class LRUCache<K extends {}, V extends {}, FC = unknown> implements Map<K, V> {
#private;
/**
* {@link LRUCache.OptionsBase.ttl}
*/
ttl: LRUCache.Milliseconds;
/**
* {@link LRUCache.OptionsBase.ttlResolution}
*/
ttlResolution: LRUCache.Milliseconds;
/**
* {@link LRUCache.OptionsBase.ttlAutopurge}
*/
ttlAutopurge: boolean;
/**
* {@link LRUCache.OptionsBase.updateAgeOnGet}
*/
updateAgeOnGet: boolean;
/**
* {@link LRUCache.OptionsBase.updateAgeOnHas}
*/
updateAgeOnHas: boolean;
/**
* {@link LRUCache.OptionsBase.allowStale}
*/
allowStale: boolean;
/**
* {@link LRUCache.OptionsBase.noDisposeOnSet}
*/
noDisposeOnSet: boolean;
/**
* {@link LRUCache.OptionsBase.noUpdateTTL}
*/
noUpdateTTL: boolean;
/**
* {@link LRUCache.OptionsBase.maxEntrySize}
*/
maxEntrySize: LRUCache.Size;
/**
* {@link LRUCache.OptionsBase.sizeCalculation}
*/
sizeCalculation?: LRUCache.SizeCalculator<K, V>;
/**
* {@link LRUCache.OptionsBase.noDeleteOnFetchRejection}
*/
noDeleteOnFetchRejection: boolean;
/**
* {@link LRUCache.OptionsBase.noDeleteOnStaleGet}
*/
noDeleteOnStaleGet: boolean;
/**
* {@link LRUCache.OptionsBase.allowStaleOnFetchAbort}
*/
allowStaleOnFetchAbort: boolean;
/**
* {@link LRUCache.OptionsBase.allowStaleOnFetchRejection}
*/
allowStaleOnFetchRejection: boolean;
/**
* {@link LRUCache.OptionsBase.ignoreFetchAbort}
*/
ignoreFetchAbort: boolean;
/**
* Do not call this method unless you need to inspect the
* inner workings of the cache. If anything returned by this
* object is modified in any way, strange breakage may occur.
*
* These fields are private for a reason!
*
* @internal
*/
static unsafeExposeInternals<K extends {}, V extends {}, FC extends unknown = unknown>(c: LRUCache<K, V, FC>): {
starts: ZeroArray | undefined;
ttls: ZeroArray | undefined;
sizes: ZeroArray | undefined;
keyMap: Map<K, number>;
keyList: (K | undefined)[];
valList: (V | BackgroundFetch<V> | undefined)[];
next: NumberArray;
prev: NumberArray;
readonly head: Index;
readonly tail: Index;
free: StackLike;
isBackgroundFetch: (p: any) => boolean;
backgroundFetch: (k: K, index: number | undefined, options: LRUCache.FetchOptions<K, V, FC>, context: any) => BackgroundFetch<V>;
moveToTail: (index: number) => void;
indexes: (options?: {
allowStale: boolean;
}) => Generator<Index, void, unknown>;
rindexes: (options?: {
allowStale: boolean;
}) => Generator<Index, void, unknown>;
isStale: (index: number | undefined) => boolean;
};
/**
* {@link LRUCache.OptionsBase.max} (read-only)
*/
get max(): LRUCache.Count;
/**
* {@link LRUCache.OptionsBase.maxSize} (read-only)
*/
get maxSize(): LRUCache.Count;
/**
* The total computed size of items in the cache (read-only)
*/
get calculatedSize(): LRUCache.Size;
/**
* The number of items stored in the cache (read-only)
*/
get size(): LRUCache.Count;
/**
* {@link LRUCache.OptionsBase.fetchMethod} (read-only)
*/
get fetchMethod(): LRUCache.Fetcher<K, V, FC> | undefined;
/**
* {@link LRUCache.OptionsBase.dispose} (read-only)
*/
get dispose(): LRUCache.Disposer<K, V> | undefined;
/**
* {@link LRUCache.OptionsBase.disposeAfter} (read-only)
*/
get disposeAfter(): LRUCache.Disposer<K, V> | undefined;
constructor(options: LRUCache.Options<K, V, FC> | LRUCache<K, V, FC>);
/**
* Return the remaining TTL time for a given entry key
*/
getRemainingTTL(key: K): number;
/**
* Return a generator yielding `[key, value]` pairs,
* in order from most recently used to least recently used.
*/
entries(): Generator<[K, V], void, unknown>;
/**
* Inverse order version of {@link LRUCache.entries}
*
* Return a generator yielding `[key, value]` pairs,
* in order from least recently used to most recently used.
*/
rentries(): Generator<(K | V | BackgroundFetch<V> | undefined)[], void, unknown>;
/**
* Return a generator yielding the keys in the cache,
* in order from most recently used to least recently used.
*/
keys(): Generator<K, void, unknown>;
/**
* Inverse order version of {@link LRUCache.keys}
*
* Return a generator yielding the keys in the cache,
* in order from least recently used to most recently used.
*/
rkeys(): Generator<K, void, unknown>;
/**
* Return a generator yielding the values in the cache,
* in order from most recently used to least recently used.
*/
values(): Generator<V, void, unknown>;
/**
* Inverse order version of {@link LRUCache.values}
*
* Return a generator yielding the values in the cache,
* in order from least recently used to most recently used.
*/
rvalues(): Generator<V | BackgroundFetch<V> | undefined, void, unknown>;
/**
* Iterating over the cache itself yields the same results as
* {@link LRUCache.entries}
*/
[Symbol.iterator](): Generator<[K, V], void, unknown>;
/**
* A String value that is used in the creation of the default string description of an object.
* Called by the built-in method Object.prototype.toString.
*/
[Symbol.toStringTag]: string;
/**
* Find a value for which the supplied fn method returns a truthy value,
* similar to Array.find(). fn is called as fn(value, key, cache).
*/
find(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => boolean, getOptions?: LRUCache.GetOptions<K, V, FC>): V | undefined;
/**
* Call the supplied function on each item in the cache, in order from
* most recently used to least recently used. fn is called as
* fn(value, key, cache). Does not update age or recency of use.
* Does not iterate over stale values.
*/
forEach(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => any, thisp?: any): void;
/**
* The same as {@link LRUCache.forEach} but items are iterated over in
* reverse order. (ie, less recently used items are iterated over first.)
*/
rforEach(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => any, thisp?: any): void;
/**
* Delete any stale entries. Returns true if anything was removed,
* false otherwise.
*/
purgeStale(): boolean;
/**
* Get the extended info about a given entry, to get its value, size, and
* TTL info simultaneously. Like {@link LRUCache#dump}, but just for a
* single key. Always returns stale values, if their info is found in the
* cache, so be sure to check for expired TTLs if relevant.
*/
info(key: K): LRUCache.Entry<V> | undefined;
/**
* Return an array of [key, {@link LRUCache.Entry}] tuples which can be
* passed to cache.load()
*/
dump(): [K, LRUCache.Entry<V>][];
/**
* Reset the cache and load in the items in entries in the order listed.
* Note that the shape of the resulting cache may be different if the
* same options are not used in both caches.
*/
load(arr: [K, LRUCache.Entry<V>][]): void;
/**
* Add a value to the cache.
*
* Note: if `undefined` is specified as a value, this is an alias for
* {@link LRUCache#delete}
*/
set(k: K, v: V | BackgroundFetch<V> | undefined, setOptions?: LRUCache.SetOptions<K, V, FC>): this;
/**
* Evict the least recently used item, returning its value or
* `undefined` if cache is empty.
*/
pop(): V | undefined;
/**
* Check if a key is in the cache, without updating the recency of use.
* Will return false if the item is stale, even though it is technically
* in the cache.
*
* Will not update item age unless
* {@link LRUCache.OptionsBase.updateAgeOnHas} is set.
*/
has(k: K, hasOptions?: LRUCache.HasOptions<K, V, FC>): boolean;
/**
* Like {@link LRUCache#get} but doesn't update recency or delete stale
* items.
*
* Returns `undefined` if the item is stale, unless
* {@link LRUCache.OptionsBase.allowStale} is set.
*/
peek(k: K, peekOptions?: LRUCache.PeekOptions<K, V, FC>): V | undefined;
/**
* Make an asynchronous cached fetch using the
* {@link LRUCache.OptionsBase.fetchMethod} function.
*
* If multiple fetches for the same key are issued, then they will all be
* coalesced into a single call to fetchMethod.
*
* Note that this means that handling options such as
* {@link LRUCache.OptionsBase.allowStaleOnFetchAbort},
* {@link LRUCache.FetchOptions.signal},
* and {@link LRUCache.OptionsBase.allowStaleOnFetchRejection} will be
* determined by the FIRST fetch() call for a given key.
*
* This is a known (fixable) shortcoming which will be addressed when
* someone complains about it, as the fix would involve added complexity and
* may not be worth the costs for this edge case.
*/
fetch(k: K, fetchOptions: unknown extends FC ? LRUCache.FetchOptions<K, V, FC> : FC extends undefined | void ? LRUCache.FetchOptionsNoContext<K, V> : LRUCache.FetchOptionsWithContext<K, V, FC>): Promise<undefined | V>;
fetch(k: unknown extends FC ? K : FC extends undefined | void ? K : never, fetchOptions?: unknown extends FC ? LRUCache.FetchOptions<K, V, FC> : FC extends undefined | void ? LRUCache.FetchOptionsNoContext<K, V> : never): Promise<undefined | V>;
/**
* Return a value from the cache. Will update the recency of the cache
* entry found.
*
* If the key is not found, get() will return `undefined`.
*/
get(k: K, getOptions?: LRUCache.GetOptions<K, V, FC>): V | undefined;
/**
* Deletes a key out of the cache.
* Returns true if the key was deleted, false otherwise.
*/
delete(k: K): boolean;
/**
* Clear the cache entirely, throwing away all values.
*/
clear(): void;
}
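/*
 * A small fetch() sketch (JavaScript; added for illustration, with a
 * hypothetical fetchMethod and URL). Concurrent fetch() calls for the same
 * key are coalesced into a single fetchMethod invocation:
 *
 *   const { LRUCache } = require('lru-cache')
 *   const cache = new LRUCache({
 *     max: 100,
 *     fetchMethod: async (key, staleValue, { signal }) => {
 *       const res = await fetch(`https://example.com/${key}`, { signal })
 *       return res.json()
 *     },
 *   })
 *   const [a, b] = await Promise.all([cache.fetch('x'), cache.fetch('x')])
 *   // fetchMethod should have run once; a and b resolve to the same value
 */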
//# sourceMappingURL=index.d.ts.map

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,3 @@
{
"type": "commonjs"
}


@@ -0,0 +1,856 @@
/**
* @module LRUCache
*/
declare const TYPE: unique symbol;
export type PosInt = number & {
[TYPE]: 'Positive Integer';
};
export type Index = number & {
[TYPE]: 'LRUCache Index';
};
export type UintArray = Uint8Array | Uint16Array | Uint32Array;
export type NumberArray = UintArray | number[];
declare class ZeroArray extends Array<number> {
constructor(size: number);
}
export type { ZeroArray };
export type { Stack };
export type StackLike = Stack | Index[];
declare class Stack {
#private;
heap: NumberArray;
length: number;
static create(max: number): StackLike;
constructor(max: number, HeapCls: {
new (n: number): NumberArray;
});
push(n: Index): void;
pop(): Index;
}
/**
* Promise representing an in-progress {@link LRUCache#fetch} call
*/
export type BackgroundFetch<V> = Promise<V | undefined> & {
__returned: BackgroundFetch<V> | undefined;
__abortController: AbortController;
__staleWhileFetching: V | undefined;
};
export type DisposeTask<K, V> = [
value: V,
key: K,
reason: LRUCache.DisposeReason
];
export declare namespace LRUCache {
/**
* An integer greater than 0, reflecting the calculated size of items
*/
type Size = number;
/**
* Integer greater than 0, representing some number of milliseconds, or the
* time at which a TTL started counting from.
*/
type Milliseconds = number;
/**
* An integer greater than 0, reflecting a number of items
*/
type Count = number;
/**
* The reason why an item was removed from the cache, passed
* to the {@link Disposer} methods.
*/
type DisposeReason = 'evict' | 'set' | 'delete';
/**
* A method called upon item removal, passed as the
* {@link OptionsBase.dispose} and/or
* {@link OptionsBase.disposeAfter} options.
*/
type Disposer<K, V> = (value: V, key: K, reason: DisposeReason) => void;
/**
* A function that returns the effective calculated size
* of an entry in the cache.
*/
type SizeCalculator<K, V> = (value: V, key: K) => Size;
/**
* Options provided to the
* {@link OptionsBase.fetchMethod} function.
*/
interface FetcherOptions<K, V, FC = unknown> {
signal: AbortSignal;
options: FetcherFetchOptions<K, V, FC>;
/**
* Object provided in the {@link FetchOptions.context} option to
* {@link LRUCache#fetch}
*/
context: FC;
}
/**
* Status object that may be passed to {@link LRUCache#fetch},
* {@link LRUCache#get}, {@link LRUCache#set}, and {@link LRUCache#has}.
*/
interface Status<V> {
/**
* The status of a set() operation.
*
* - add: the item was not found in the cache, and was added
* - update: the item was in the cache, with the same value provided
* - replace: the item was in the cache, and replaced
* - miss: the item was not added to the cache for some reason
*/
set?: 'add' | 'update' | 'replace' | 'miss';
/**
* the ttl stored for the item, or undefined if ttls are not used.
*/
ttl?: Milliseconds;
/**
* the start time for the item, or undefined if ttls are not used.
*/
start?: Milliseconds;
/**
* The timestamp used for TTL calculation
*/
now?: Milliseconds;
/**
* the remaining ttl for the item, or undefined if ttls are not used.
*/
remainingTTL?: Milliseconds;
/**
* The calculated size for the item, if sizes are used.
*/
entrySize?: Size;
/**
* The total calculated size of the cache, if sizes are used.
*/
totalCalculatedSize?: Size;
/**
* A flag indicating that the item was not stored, due to exceeding the
* {@link OptionsBase.maxEntrySize}
*/
maxEntrySizeExceeded?: true;
/**
* The old value, specified in the case of `set:'update'` or
* `set:'replace'`
*/
oldValue?: V;
/**
* The results of a {@link LRUCache#has} operation
*
* - hit: the item was found in the cache
* - stale: the item was found in the cache, but is stale
* - miss: the item was not found in the cache
*/
has?: 'hit' | 'stale' | 'miss';
/**
* The status of a {@link LRUCache#fetch} operation.
* Note that this can change as the underlying fetch() moves through
* various states.
*
* - inflight: there is another fetch() for this key which is in process
* - get: there is no fetchMethod, so {@link LRUCache#get} was called.
* - miss: the item is not in cache, and will be fetched.
* - hit: the item is in the cache, and was resolved immediately.
* - stale: the item is in the cache, but stale.
* - refresh: the item is in the cache, and not stale, but
* {@link FetchOptions.forceRefresh} was specified.
*/
fetch?: 'get' | 'inflight' | 'miss' | 'hit' | 'stale' | 'refresh';
/**
* The {@link OptionsBase.fetchMethod} was called
*/
fetchDispatched?: true;
/**
* The cached value was updated after a successful call to
* {@link OptionsBase.fetchMethod}
*/
fetchUpdated?: true;
/**
* The reason for a fetch() rejection. Either the error raised by the
* {@link OptionsBase.fetchMethod}, or the reason for an
* AbortSignal.
*/
fetchError?: Error;
/**
* The fetch received an abort signal
*/
fetchAborted?: true;
/**
* The abort signal received was ignored, and the fetch was allowed to
* continue.
*/
fetchAbortIgnored?: true;
/**
* The fetchMethod promise resolved successfully
*/
fetchResolved?: true;
/**
* The fetchMethod promise was rejected
*/
fetchRejected?: true;
/**
* The status of a {@link LRUCache#get} operation.
*
* - fetching: The item is currently being fetched. If a previous value
* is present and allowed, that will be returned.
* - stale: The item is in the cache, and is stale.
* - hit: the item is in the cache
* - miss: the item is not in the cache
*/
get?: 'stale' | 'hit' | 'miss';
/**
* A fetch or get operation returned a stale value.
*/
returnedStale?: true;
}
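    /*
     * A small sketch of how a Status object can be used to observe what a
     * call did (the key, value, and ttl below are illustrative assumptions):
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const cache = new LRUCache<string, number>({ max: 10, ttl: 1000 })
     * cache.set('x', 1)
     *
     * const status: LRUCache.Status<number> = {}
     * const value = cache.get('x', { status })
     * console.log(value)      // 1
     * console.log(status.get) // 'hit' (or 'stale' / 'miss')
     * // status.remainingTTL, status.now, etc. are filled in when ttl is used
     * ```
     */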
/**
* options which override the options set in the LRUCache constructor
* when calling {@link LRUCache#fetch}.
*
* This is the union of {@link GetOptions} and {@link SetOptions}, plus
* {@link OptionsBase.noDeleteOnFetchRejection},
* {@link OptionsBase.allowStaleOnFetchRejection},
* {@link FetchOptions.forceRefresh}, and
* {@link FetcherOptions.context}
*
* Any of these may be modified in the {@link OptionsBase.fetchMethod}
* function, but the {@link GetOptions} fields will of course have no
* effect, as the {@link LRUCache#get} call already happened by the time
* the fetchMethod is called.
*/
interface FetcherFetchOptions<K, V, FC = unknown> extends Pick<OptionsBase<K, V, FC>, 'allowStale' | 'updateAgeOnGet' | 'noDeleteOnStaleGet' | 'sizeCalculation' | 'ttl' | 'noDisposeOnSet' | 'noUpdateTTL' | 'noDeleteOnFetchRejection' | 'allowStaleOnFetchRejection' | 'ignoreFetchAbort' | 'allowStaleOnFetchAbort'> {
status?: Status<V>;
size?: Size;
}
/**
* Options that may be passed to the {@link LRUCache#fetch} method.
*/
interface FetchOptions<K, V, FC> extends FetcherFetchOptions<K, V, FC> {
/**
* Set to true to force a re-load of the existing data, even if it
* is not yet stale.
*/
forceRefresh?: boolean;
/**
* Context provided to the {@link OptionsBase.fetchMethod} as
* the {@link FetcherOptions.context} param.
*
* If the FC type is specified as unknown (the default),
* undefined or void, then this is optional. Otherwise, it will
* be required.
*/
context?: FC;
signal?: AbortSignal;
status?: Status<V>;
}
/**
* Options provided to {@link LRUCache#fetch} when the FC type is something
* other than `unknown`, `undefined`, or `void`
*/
interface FetchOptionsWithContext<K, V, FC> extends FetchOptions<K, V, FC> {
context: FC;
}
/**
* Options provided to {@link LRUCache#fetch} when the FC type is
* `undefined` or `void`
*/
interface FetchOptionsNoContext<K, V> extends FetchOptions<K, V, undefined> {
context?: undefined;
}
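    /*
     * A sketch of the FC/context typing described above: when FC is given a
     * concrete type, `context` becomes required on fetch() and is handed to
     * the fetchMethod. The { lang } shape and the lookup body are
     * assumptions.
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const cache = new LRUCache<string, string, { lang: string }>({
     *   max: 100,
     *   fetchMethod: async (key, staleValue, { context }) =>
     *     `${key} (${context.lang})`, // stand-in for a real lookup
     * })
     *
     * async function demo() {
     *   // context is required here because FC is not unknown/undefined/void
     *   const greeting = await cache.fetch('hello', { context: { lang: 'en' } })
     *   console.log(greeting) // 'hello (en)'
     * }
     * ```
     */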
/**
* Options that may be passed to the {@link LRUCache#has} method.
*/
interface HasOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'updateAgeOnHas'> {
status?: Status<V>;
}
/**
* Options that may be passed to the {@link LRUCache#get} method.
*/
interface GetOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'allowStale' | 'updateAgeOnGet' | 'noDeleteOnStaleGet'> {
status?: Status<V>;
}
/**
* Options that may be passed to the {@link LRUCache#peek} method.
*/
interface PeekOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'allowStale'> {
}
/**
* Options that may be passed to the {@link LRUCache#set} method.
*/
interface SetOptions<K, V, FC> extends Pick<OptionsBase<K, V, FC>, 'sizeCalculation' | 'ttl' | 'noDisposeOnSet' | 'noUpdateTTL'> {
/**
* If size tracking is enabled, then setting an explicit size
* in the {@link LRUCache#set} call will prevent calling the
* {@link OptionsBase.sizeCalculation} function.
*/
size?: Size;
/**
* If TTL tracking is enabled, then setting an explicit start
* time in the {@link LRUCache#set} call will override the
* default time from `performance.now()` or `Date.now()`.
*
* Note that it must be a valid value for whichever time-tracking
* method is in use.
*/
start?: Milliseconds;
status?: Status<V>;
}
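    /*
     * A sketch of SetOptions in practice: an explicit `size` skips the
     * sizeCalculation function for that entry, and `start`/`ttl` override
     * the TTL bookkeeping for it. The values below are assumptions.
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const cache = new LRUCache<string, string>({
     *   max: 100,
     *   maxSize: 10_000,
     *   ttl: 5 * 60_000,
     *   sizeCalculation: v => v.length || 1,
     * })
     *
     * cache.set('report', 'large payload...', {
     *   size: 2048,               // skip sizeCalculation for this entry
     *   ttl: 60_000,              // per-entry TTL override
     *   start: performance.now(), // must match the cache's time source
     * })
     * ```
     */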
/**
* The type signature for the {@link OptionsBase.fetchMethod} option.
*/
type Fetcher<K, V, FC = unknown> = (key: K, staleValue: V | undefined, options: FetcherOptions<K, V, FC>) => Promise<V | undefined | void> | V | undefined | void;
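    /*
     * A sketch of a Fetcher implementation. The URL scheme and the use of
     * the global fetch() are assumptions (any async lookup works); the
     * signal is forwarded so the cache can abort in-flight work.
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const cache = new LRUCache<string, any>({
     *   max: 500,
     *   ttl: 60_000,
     *   fetchMethod: async (key, staleValue, { signal }) => {
     *     const res = await fetch(`https://example.com/api/${key}`, { signal })
     *     if (!res.ok) return staleValue // keep the previous value, if any
     *     return res.json()
     *   },
     * })
     * ```
     */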
/**
* Options which may be passed to the {@link LRUCache} constructor.
*
* Most of these may be overridden in the various options that use
* them.
*
* Despite all being technically optional, the constructor requires that
* a cache is at minimum limited by one or more of {@link OptionsBase.max},
* {@link OptionsBase.ttl}, or {@link OptionsBase.maxSize}.
*
* If {@link OptionsBase.ttl} is used alone, then it is strongly advised
* (and in fact required by the type definitions here) that the cache
* also set {@link OptionsBase.ttlAutopurge}, to prevent potentially
* unbounded storage.
*/
interface OptionsBase<K, V, FC> {
/**
* The maximum number of items to store in the cache before evicting
* old entries. This is read-only on the {@link LRUCache} instance,
* and may not be overridden.
*
* If set, then storage space will be pre-allocated at construction
* time, and the cache will perform significantly faster.
*
* Note that significantly fewer items may be stored, if
* {@link OptionsBase.maxSize} and/or {@link OptionsBase.ttl} are also
* set.
*/
max?: Count;
/**
* Max time in milliseconds for items to live in cache before they are
* considered stale. Note that stale items are NOT preemptively removed
* by default, and MAY live in the cache long after they have expired.
*
* Also, as this cache is optimized for LRU/MRU operations, some of
* the staleness/TTL checks will reduce performance, as they will incur
* overhead by deleting items.
*
* Must be an integer number of ms. If set to 0, this indicates "no TTL"
*
* @default 0
*/
ttl?: Milliseconds;
/**
* Minimum amount of time in ms in which to check for staleness.
* Defaults to 1, which means that the current time is checked
* at most once per millisecond.
*
* Set to 0 to check the current time every time staleness is tested.
* (This reduces performance, and is theoretically unnecessary.)
*
* Setting this to a higher value will improve performance somewhat
* while using ttl tracking, albeit at the expense of keeping stale
* items around a bit longer than their TTLs would indicate.
*
* @default 1
*/
ttlResolution?: Milliseconds;
/**
* Preemptively remove stale items from the cache.
* Note that this may significantly degrade performance,
* especially if the cache is storing a large number of items.
* It is almost always best to just leave the stale items in
* the cache, and let them fall out as new items are added.
*
* Note that this means that {@link OptionsBase.allowStale} is a bit
* pointless, as stale items will be deleted almost as soon as they
* expire.
*
* @default false
*/
ttlAutopurge?: boolean;
/**
* Update the age of items on {@link LRUCache#get}, renewing their TTL
*
* Has no effect if {@link OptionsBase.ttl} is not set.
*
* @default false
*/
updateAgeOnGet?: boolean;
/**
* Update the age of items on {@link LRUCache#has}, renewing their TTL
*
* Has no effect if {@link OptionsBase.ttl} is not set.
*
* @default false
*/
updateAgeOnHas?: boolean;
/**
* Allow {@link LRUCache#get} and {@link LRUCache#fetch} calls to return
* stale data, if available.
*/
allowStale?: boolean;
/**
* Function that is called on items when they are dropped from the cache.
* This can be handy if you want to close file descriptors or do other
         * cleanup tasks when items are no longer accessible. Called with
         * `value, key, reason`. It's called before actually removing the item from the
* internal cache, so it is *NOT* safe to re-add them.
*
* Use {@link OptionsBase.disposeAfter} if you wish to dispose items after
         * they have been fully removed, when it is safe to add them back to the
* cache.
*/
dispose?: Disposer<K, V>;
/**
* The same as {@link OptionsBase.dispose}, but called *after* the entry
* is completely removed and the cache is once again in a clean state.
* It is safe to add an item right back into the cache at this point.
* However, note that it is *very* easy to inadvertently create infinite
* recursion this way.
*/
disposeAfter?: Disposer<K, V>;
/**
* Set to true to suppress calling the
* {@link OptionsBase.dispose} function if the entry key is
* still accessible within the cache.
* This may be overridden by passing an options object to
* {@link LRUCache#set}.
*/
noDisposeOnSet?: boolean;
/**
* Boolean flag to tell the cache to not update the TTL when
* setting a new value for an existing key (ie, when updating a value
* rather than inserting a new value). Note that the TTL value is
* _always_ set (if provided) when adding a new entry into the cache.
*
* Has no effect if a {@link OptionsBase.ttl} is not set.
*/
noUpdateTTL?: boolean;
/**
         * If you wish to track item size, you must provide a maxSize.
         * Note that we will still only keep up to max *actual items*,
         * if max is set, so size tracking may cause fewer than max items
* to be stored. At the extreme, a single item of maxSize size
* will cause everything else in the cache to be dropped when it
* is added. Use with caution!
*
* Note also that size tracking can negatively impact performance,
* though for most cases, only minimally.
*/
maxSize?: Size;
/**
* The maximum allowed size for any single item in the cache.
*
* If a larger item is passed to {@link LRUCache#set} or returned by a
* {@link OptionsBase.fetchMethod}, then it will not be stored in the
* cache.
*/
maxEntrySize?: Size;
/**
* A function that returns a number indicating the item's size.
*
* If not provided, and {@link OptionsBase.maxSize} or
* {@link OptionsBase.maxEntrySize} are set, then all
* {@link LRUCache#set} calls **must** provide an explicit
* {@link SetOptions.size} or sizeCalculation param.
*/
sizeCalculation?: SizeCalculator<K, V>;
/**
* Method that provides the implementation for {@link LRUCache#fetch}
*/
fetchMethod?: Fetcher<K, V, FC>;
/**
* Set to true to suppress the deletion of stale data when a
* {@link OptionsBase.fetchMethod} returns a rejected promise.
*/
noDeleteOnFetchRejection?: boolean;
/**
* Do not delete stale items when they are retrieved with
* {@link LRUCache#get}.
*
* Note that the `get` return value will still be `undefined`
* unless {@link OptionsBase.allowStale} is true.
*/
noDeleteOnStaleGet?: boolean;
/**
* Set to true to allow returning stale data when a
* {@link OptionsBase.fetchMethod} throws an error or returns a rejected
* promise.
*
* This differs from using {@link OptionsBase.allowStale} in that stale
* data will ONLY be returned in the case that the
* {@link LRUCache#fetch} fails, not any other times.
*/
allowStaleOnFetchRejection?: boolean;
/**
* Set to true to return a stale value from the cache when the
* `AbortSignal` passed to the {@link OptionsBase.fetchMethod} dispatches an `'abort'`
* event, whether user-triggered, or due to internal cache behavior.
*
* Unless {@link OptionsBase.ignoreFetchAbort} is also set, the underlying
* {@link OptionsBase.fetchMethod} will still be considered canceled, and
* any value it returns will be ignored and not cached.
*
* Caveat: since fetches are aborted when a new value is explicitly
* set in the cache, this can lead to fetch returning a stale value,
* since that was the fallback value _at the moment the `fetch()` was
* initiated_, even though the new updated value is now present in
* the cache.
*
* For example:
*
* ```ts
* const cache = new LRUCache<string, any>({
* ttl: 100,
* fetchMethod: async (url, oldValue, { signal }) => {
* const res = await fetch(url, { signal })
* return await res.json()
* }
* })
* cache.set('https://example.com/', { some: 'data' })
* // 100ms go by...
* const result = cache.fetch('https://example.com/')
* cache.set('https://example.com/', { other: 'thing' })
* console.log(await result) // { some: 'data' }
* console.log(cache.get('https://example.com/')) // { other: 'thing' }
* ```
*/
allowStaleOnFetchAbort?: boolean;
/**
* Set to true to ignore the `abort` event emitted by the `AbortSignal`
* object passed to {@link OptionsBase.fetchMethod}, and still cache the
* resulting resolution value, as long as it is not `undefined`.
*
* When used on its own, this means aborted {@link LRUCache#fetch} calls are not
* immediately resolved or rejected when they are aborted, and instead
* take the full time to await.
*
* When used with {@link OptionsBase.allowStaleOnFetchAbort}, aborted
* {@link LRUCache#fetch} calls will resolve immediately to their stale
* cached value or `undefined`, and will continue to process and eventually
* update the cache when they resolve, as long as the resulting value is
* not `undefined`, thus supporting a "return stale on timeout while
* refreshing" mechanism by passing `AbortSignal.timeout(n)` as the signal.
*
         * **Note**: regardless of this setting, an `abort` event _is still
         * emitted on the `AbortSignal` object_, so it may cause incorrect results
         * when that signal is also passed to other underlying APIs that use AbortSignals.
*
* This may be overridden in the {@link OptionsBase.fetchMethod} or the
* call to {@link LRUCache#fetch}.
*/
ignoreFetchAbort?: boolean;
}
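    /*
     * A sketch of the "return stale on timeout while refreshing" pattern
     * described for allowStaleOnFetchAbort + ignoreFetchAbort. The 200ms
     * slowLookup helper and the 50ms timeout are assumptions, and a runtime
     * with AbortSignal.timeout is assumed.
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const slowLookup = async (key: string) =>
     *   new Promise<string>(res => setTimeout(() => res(`fresh:${key}`), 200))
     *
     * const cache = new LRUCache<string, string>({
     *   max: 100,
     *   ttl: 1000,
     *   allowStaleOnFetchAbort: true,
     *   ignoreFetchAbort: true,
     *   fetchMethod: key => slowLookup(key),
     * })
     *
     * async function demo() {
     *   cache.set('k', 'cached-value')
     *   // Resolves when the 50ms signal fires, with the previously cached
     *   // value; the 200ms lookup keeps running and updates the cache later.
     *   const v = await cache.fetch('k', {
     *     forceRefresh: true,
     *     signal: AbortSignal.timeout(50),
     *   })
     *   console.log(v)
     * }
     * ```
     */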
interface OptionsMaxLimit<K, V, FC> extends OptionsBase<K, V, FC> {
max: Count;
}
interface OptionsTTLLimit<K, V, FC> extends OptionsBase<K, V, FC> {
ttl: Milliseconds;
ttlAutopurge: boolean;
}
interface OptionsSizeLimit<K, V, FC> extends OptionsBase<K, V, FC> {
maxSize: Size;
}
/**
* The valid safe options for the {@link LRUCache} constructor
*/
type Options<K, V, FC> = OptionsMaxLimit<K, V, FC> | OptionsSizeLimit<K, V, FC> | OptionsTTLLimit<K, V, FC>;
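    /*
     * The three ways to satisfy the Options union, sketched out (the limits
     * and sizing below are illustrative assumptions):
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * // OptionsMaxLimit: bounded by item count
     * const byCount = new LRUCache<string, string>({ max: 1000 })
     *
     * // OptionsSizeLimit: bounded by calculated size
     * const bySize = new LRUCache<string, string>({
     *   maxSize: 5000,
     *   sizeCalculation: v => v.length || 1,
     * })
     *
     * // OptionsTTLLimit: bounded by TTL alone, so ttlAutopurge is required
     * const byTtl = new LRUCache<string, string>({ ttl: 60_000, ttlAutopurge: true })
     * ```
     */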
/**
* Entry objects used by {@link LRUCache#load} and {@link LRUCache#dump},
* and returned by {@link LRUCache#info}.
*/
interface Entry<V> {
value: V;
ttl?: Milliseconds;
size?: Size;
start?: Milliseconds;
}
}
/**
* Default export, the thing you're using this module to get.
*
* All properties from the options object (with the exception of
* {@link OptionsBase.max} and {@link OptionsBase.maxSize}) are added as
 * normal public members. (`max` and `maxSize` are read-only getters.)
* Changing any of these will alter the defaults for subsequent method calls,
* but is otherwise safe.
*/
export declare class LRUCache<K extends {}, V extends {}, FC = unknown> implements Map<K, V> {
#private;
/**
* {@link LRUCache.OptionsBase.ttl}
*/
ttl: LRUCache.Milliseconds;
/**
* {@link LRUCache.OptionsBase.ttlResolution}
*/
ttlResolution: LRUCache.Milliseconds;
/**
* {@link LRUCache.OptionsBase.ttlAutopurge}
*/
ttlAutopurge: boolean;
/**
* {@link LRUCache.OptionsBase.updateAgeOnGet}
*/
updateAgeOnGet: boolean;
/**
* {@link LRUCache.OptionsBase.updateAgeOnHas}
*/
updateAgeOnHas: boolean;
/**
* {@link LRUCache.OptionsBase.allowStale}
*/
allowStale: boolean;
/**
* {@link LRUCache.OptionsBase.noDisposeOnSet}
*/
noDisposeOnSet: boolean;
/**
* {@link LRUCache.OptionsBase.noUpdateTTL}
*/
noUpdateTTL: boolean;
/**
* {@link LRUCache.OptionsBase.maxEntrySize}
*/
maxEntrySize: LRUCache.Size;
/**
* {@link LRUCache.OptionsBase.sizeCalculation}
*/
sizeCalculation?: LRUCache.SizeCalculator<K, V>;
/**
* {@link LRUCache.OptionsBase.noDeleteOnFetchRejection}
*/
noDeleteOnFetchRejection: boolean;
/**
* {@link LRUCache.OptionsBase.noDeleteOnStaleGet}
*/
noDeleteOnStaleGet: boolean;
/**
* {@link LRUCache.OptionsBase.allowStaleOnFetchAbort}
*/
allowStaleOnFetchAbort: boolean;
/**
* {@link LRUCache.OptionsBase.allowStaleOnFetchRejection}
*/
allowStaleOnFetchRejection: boolean;
/**
* {@link LRUCache.OptionsBase.ignoreFetchAbort}
*/
ignoreFetchAbort: boolean;
/**
* Do not call this method unless you need to inspect the
* inner workings of the cache. If anything returned by this
* object is modified in any way, strange breakage may occur.
*
* These fields are private for a reason!
*
* @internal
*/
static unsafeExposeInternals<K extends {}, V extends {}, FC extends unknown = unknown>(c: LRUCache<K, V, FC>): {
starts: ZeroArray | undefined;
ttls: ZeroArray | undefined;
sizes: ZeroArray | undefined;
keyMap: Map<K, number>;
keyList: (K | undefined)[];
valList: (V | BackgroundFetch<V> | undefined)[];
next: NumberArray;
prev: NumberArray;
readonly head: Index;
readonly tail: Index;
free: StackLike;
isBackgroundFetch: (p: any) => boolean;
backgroundFetch: (k: K, index: number | undefined, options: LRUCache.FetchOptions<K, V, FC>, context: any) => BackgroundFetch<V>;
moveToTail: (index: number) => void;
indexes: (options?: {
allowStale: boolean;
}) => Generator<Index, void, unknown>;
rindexes: (options?: {
allowStale: boolean;
}) => Generator<Index, void, unknown>;
isStale: (index: number | undefined) => boolean;
};
/**
* {@link LRUCache.OptionsBase.max} (read-only)
*/
get max(): LRUCache.Count;
/**
* {@link LRUCache.OptionsBase.maxSize} (read-only)
*/
get maxSize(): LRUCache.Count;
/**
* The total computed size of items in the cache (read-only)
*/
get calculatedSize(): LRUCache.Size;
/**
* The number of items stored in the cache (read-only)
*/
get size(): LRUCache.Count;
/**
* {@link LRUCache.OptionsBase.fetchMethod} (read-only)
*/
get fetchMethod(): LRUCache.Fetcher<K, V, FC> | undefined;
/**
* {@link LRUCache.OptionsBase.dispose} (read-only)
*/
get dispose(): LRUCache.Disposer<K, V> | undefined;
/**
* {@link LRUCache.OptionsBase.disposeAfter} (read-only)
*/
get disposeAfter(): LRUCache.Disposer<K, V> | undefined;
constructor(options: LRUCache.Options<K, V, FC> | LRUCache<K, V, FC>);
/**
* Return the remaining TTL time for a given entry key
*/
getRemainingTTL(key: K): number;
/**
* Return a generator yielding `[key, value]` pairs,
* in order from most recently used to least recently used.
*/
entries(): Generator<[K, V], void, unknown>;
/**
* Inverse order version of {@link LRUCache.entries}
*
* Return a generator yielding `[key, value]` pairs,
* in order from least recently used to most recently used.
*/
rentries(): Generator<(K | V | BackgroundFetch<V> | undefined)[], void, unknown>;
/**
* Return a generator yielding the keys in the cache,
* in order from most recently used to least recently used.
*/
keys(): Generator<K, void, unknown>;
/**
* Inverse order version of {@link LRUCache.keys}
*
* Return a generator yielding the keys in the cache,
* in order from least recently used to most recently used.
*/
rkeys(): Generator<K, void, unknown>;
/**
* Return a generator yielding the values in the cache,
* in order from most recently used to least recently used.
*/
values(): Generator<V, void, unknown>;
/**
* Inverse order version of {@link LRUCache.values}
*
* Return a generator yielding the values in the cache,
* in order from least recently used to most recently used.
*/
rvalues(): Generator<V | BackgroundFetch<V> | undefined, void, unknown>;
/**
* Iterating over the cache itself yields the same results as
* {@link LRUCache.entries}
*/
[Symbol.iterator](): Generator<[K, V], void, unknown>;
/**
* A String value that is used in the creation of the default string description of an object.
* Called by the built-in method Object.prototype.toString.
*/
[Symbol.toStringTag]: string;
/**
* Find a value for which the supplied fn method returns a truthy value,
* similar to Array.find(). fn is called as fn(value, key, cache).
*/
find(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => boolean, getOptions?: LRUCache.GetOptions<K, V, FC>): V | undefined;
/**
* Call the supplied function on each item in the cache, in order from
* most recently used to least recently used. fn is called as
     * fn(value, key, cache). Does not update age or recency of use.
* Does not iterate over stale values.
*/
forEach(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => any, thisp?: any): void;
/**
* The same as {@link LRUCache.forEach} but items are iterated over in
* reverse order. (ie, less recently used items are iterated over first.)
*/
rforEach(fn: (v: V, k: K, self: LRUCache<K, V, FC>) => any, thisp?: any): void;
/**
* Delete any stale entries. Returns true if anything was removed,
* false otherwise.
*/
purgeStale(): boolean;
/**
* Get the extended info about a given entry, to get its value, size, and
* TTL info simultaneously. Like {@link LRUCache#dump}, but just for a
* single key. Always returns stale values, if their info is found in the
* cache, so be sure to check for expired TTLs if relevant.
*/
info(key: K): LRUCache.Entry<V> | undefined;
/**
* Return an array of [key, {@link LRUCache.Entry}] tuples which can be
* passed to cache.load()
*/
dump(): [K, LRUCache.Entry<V>][];
/**
* Reset the cache and load in the items in entries in the order listed.
* Note that the shape of the resulting cache may be different if the
* same options are not used in both caches.
*/
load(arr: [K, LRUCache.Entry<V>][]): void;
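    /*
     * A sketch of info()/dump()/load() together: dump() produces the
     * [key, Entry] tuples that load() replays into another cache (the keys
     * and values here are assumptions).
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * const src = new LRUCache<string, number>({ max: 10, ttl: 1000 })
     * src.set('a', 1)
     * src.set('b', 2)
     *
     * console.log(src.info('a'))  // Entry: value plus ttl/size info when tracked
     * const snapshot = src.dump() // [key, Entry][] for every entry
     *
     * const copy = new LRUCache<string, number>({ max: 10, ttl: 1000 })
     * copy.load(snapshot)
     * console.log(copy.get('b'))  // 2
     * ```
     */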
/**
* Add a value to the cache.
*
* Note: if `undefined` is specified as a value, this is an alias for
* {@link LRUCache#delete}
*/
set(k: K, v: V | BackgroundFetch<V> | undefined, setOptions?: LRUCache.SetOptions<K, V, FC>): this;
/**
* Evict the least recently used item, returning its value or
     * `undefined` if the cache is empty.
*/
pop(): V | undefined;
/**
* Check if a key is in the cache, without updating the recency of use.
* Will return false if the item is stale, even though it is technically
* in the cache.
*
* Will not update item age unless
* {@link LRUCache.OptionsBase.updateAgeOnHas} is set.
*/
has(k: K, hasOptions?: LRUCache.HasOptions<K, V, FC>): boolean;
/**
* Like {@link LRUCache#get} but doesn't update recency or delete stale
* items.
*
* Returns `undefined` if the item is stale, unless
* {@link LRUCache.OptionsBase.allowStale} is set.
*/
peek(k: K, peekOptions?: LRUCache.PeekOptions<K, V, FC>): V | undefined;
/**
* Make an asynchronous cached fetch using the
* {@link LRUCache.OptionsBase.fetchMethod} function.
*
* If multiple fetches for the same key are issued, then they will all be
* coalesced into a single call to fetchMethod.
*
* Note that this means that handling options such as
* {@link LRUCache.OptionsBase.allowStaleOnFetchAbort},
* {@link LRUCache.FetchOptions.signal},
* and {@link LRUCache.OptionsBase.allowStaleOnFetchRejection} will be
* determined by the FIRST fetch() call for a given key.
*
     * This is a known (fixable) shortcoming which will be addressed when
     * someone complains about it, as the fix would add complexity and
     * may not be worth the cost for this edge case.
*/
fetch(k: K, fetchOptions: unknown extends FC ? LRUCache.FetchOptions<K, V, FC> : FC extends undefined | void ? LRUCache.FetchOptionsNoContext<K, V> : LRUCache.FetchOptionsWithContext<K, V, FC>): Promise<undefined | V>;
fetch(k: unknown extends FC ? K : FC extends undefined | void ? K : never, fetchOptions?: unknown extends FC ? LRUCache.FetchOptions<K, V, FC> : FC extends undefined | void ? LRUCache.FetchOptionsNoContext<K, V> : never): Promise<undefined | V>;
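    /*
     * A sketch of the coalescing behavior described above: concurrent
     * fetch() calls for one key share a single fetchMethod invocation
     * (the key and the counting fetchMethod are assumptions).
     *
     * ```ts
     * import { LRUCache } from 'lru-cache'
     *
     * let calls = 0
     * const cache = new LRUCache<string, number>({
     *   max: 10,
     *   fetchMethod: async key => {
     *     calls++
     *     return key.length
     *   },
     * })
     *
     * async function demo() {
     *   const [a, b] = await Promise.all([
     *     cache.fetch('same-key'),
     *     cache.fetch('same-key'),
     *   ])
     *   console.log(a, b, calls) // 8 8 1
     * }
     * ```
     */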
/**
* Return a value from the cache. Will update the recency of the cache
* entry found.
*
* If the key is not found, get() will return `undefined`.
*/
get(k: K, getOptions?: LRUCache.GetOptions<K, V, FC>): V | undefined;
/**
* Deletes a key out of the cache.
* Returns true if the key was deleted, false otherwise.
*/
delete(k: K): boolean;
/**
* Clear the cache entirely, throwing away all values.
*/
clear(): void;
}
//# sourceMappingURL=index.d.ts.map

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
{
"type": "module"
}


@ -0,0 +1,118 @@
{
"name": "lru-cache",
"description": "A cache object that deletes the least-recently-used items.",
"version": "10.2.0",
"author": "Isaac Z. Schlueter <i@izs.me>",
"keywords": [
"mru",
"lru",
"cache"
],
"sideEffects": false,
"scripts": {
"build": "npm run prepare",
"prepare": "tshy",
"postprepare": "bash fixup.sh",
"pretest": "npm run prepare",
"presnap": "npm run prepare",
"test": "tap",
"snap": "tap",
"preversion": "npm test",
"postversion": "npm publish",
"prepublishOnly": "git push origin --follow-tags",
"format": "prettier --write .",
"typedoc": "typedoc --tsconfig ./.tshy/esm.json ./src/*.ts",
"benchmark-results-typedoc": "bash scripts/benchmark-results-typedoc.sh",
"prebenchmark": "npm run prepare",
"benchmark": "make -C benchmark",
"preprofile": "npm run prepare",
"profile": "make -C benchmark profile"
},
"main": "./dist/commonjs/index.js",
"types": "./dist/commonjs/index.d.ts",
"tshy": {
"exports": {
".": "./src/index.ts",
"./min": {
"import": {
"types": "./dist/mjs/index.d.ts",
"default": "./dist/mjs/index.min.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.min.js"
}
}
}
},
"repository": {
"type": "git",
"url": "git://github.com/isaacs/node-lru-cache.git"
},
"devDependencies": {
"@tapjs/clock": "^1.1.16",
"@types/node": "^20.2.5",
"@types/tap": "^15.0.6",
"benchmark": "^2.1.4",
"clock-mock": "^2.0.2",
"esbuild": "^0.17.11",
"eslint-config-prettier": "^8.5.0",
"marked": "^4.2.12",
"mkdirp": "^2.1.5",
"prettier": "^2.6.2",
"tap": "^18.5.7",
"tshy": "^1.8.0",
"tslib": "^2.4.0",
"typedoc": "^0.25.3",
"typescript": "^5.2.2"
},
"license": "ISC",
"files": [
"dist"
],
"engines": {
"node": "14 || >=16.14"
},
"prettier": {
"semi": false,
"printWidth": 70,
"tabWidth": 2,
"useTabs": false,
"singleQuote": true,
"jsxSingleQuote": false,
"bracketSameLine": true,
"arrowParens": "avoid",
"endOfLine": "lf"
},
"tap": {
"node-arg": [
"--expose-gc"
],
"plugin": [
"@tapjs/clock"
]
},
"exports": {
".": {
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.js"
}
},
"./min": {
"import": {
"types": "./dist/mjs/index.d.ts",
"default": "./dist/mjs/index.min.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.min.js"
}
}
},
"type": "module"
}
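
The `exports` map above publishes an ESM build, a CommonJS build, and a minified `./min` subpath. A minimal import sketch (the specifiers come from that map; the named export on `./min` is assumed to match the main entry):

```ts
// Main entry, resolved through the "." conditions (ESM shown here)
import { LRUCache } from 'lru-cache'

// Minified build, resolved through the "./min" subpath
import { LRUCache as MinLRUCache } from 'lru-cache/min'

const cache = new LRUCache<string, number>({ max: 100 })
```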

66
my-app/node_modules/hosted-git-info/package.json generated vendored Executable file

@ -0,0 +1,66 @@
{
"name": "hosted-git-info",
"version": "7.0.1",
"description": "Provides metadata and conversions from repository urls for GitHub, Bitbucket and GitLab",
"main": "./lib/index.js",
"repository": {
"type": "git",
"url": "https://github.com/npm/hosted-git-info.git"
},
"keywords": [
"git",
"github",
"bitbucket",
"gitlab"
],
"author": "GitHub Inc.",
"license": "ISC",
"bugs": {
"url": "https://github.com/npm/hosted-git-info/issues"
},
"homepage": "https://github.com/npm/hosted-git-info",
"scripts": {
"posttest": "npm run lint",
"snap": "tap",
"test": "tap",
"test:coverage": "tap --coverage-report=html",
"lint": "eslint \"**/*.js\"",
"postlint": "template-oss-check",
"lintfix": "npm run lint -- --fix",
"template-oss-apply": "template-oss-apply --force"
},
"dependencies": {
"lru-cache": "^10.0.1"
},
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.18.0",
"tap": "^16.0.1"
},
"files": [
"bin/",
"lib/"
],
"engines": {
"node": "^16.14.0 || >=18.0.0"
},
"tap": {
"color": 1,
"coverage": true,
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.18.0",
"publish": "true",
"ciVersions": [
"16.14.0",
"16.x",
"18.0.0",
"18.x"
]
}
}