feat: Add new gcloud commands, API clients, and third-party libraries across various services.

2026-01-01 20:26:35 +01:00
parent 5e23cbece0
commit a19e592eb7
25221 changed files with 8324611 additions and 0 deletions


@@ -0,0 +1,2 @@
github: [tkem]
custom: ["https://www.paypal.me/tkem"]


@@ -0,0 +1,29 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: tkem
---
Before reporting a bug, please make sure you have the latest `cachetools` version installed:
```
pip install --upgrade cachetools
```
**Describe the bug**
A clear and concise description of what the bug is.
**Expected result**
A clear and concise description of what you expected to happen.
**Actual result**
A clear and concise description of what happened instead.
**Reproduction steps**
```python
import cachetools
```


@@ -0,0 +1,10 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: tkem
---
Sorry, but `cachetools` is not accepting feature requests at this time.


@@ -0,0 +1,13 @@
# Security Policy
## Supported Versions
Security updates are applied only to the latest release.
## Reporting a Vulnerability
If you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public exposure, reducing the chance that the exploit will be used before a patch is released.
Please disclose it at [security advisory](https://github.com/tkem/cachetools/security/advisories/new).
This project is maintained by a single person on a best effort basis. As such, vulnerability reports will be investigated and fixed or disclosed as soon as possible, but there may be delays in response time due to the maintainer's other commitments.


@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"


@@ -0,0 +1,27 @@
name: CI
on: [push, pull_request, workflow_dispatch]
permissions:
contents: read
jobs:
main:
name: Python ${{ matrix.python }}
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
python: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12", "3.13", "pypy3.9", "pypy3.10"]
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938 # v4.2.0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # v5.2.0
with:
python-version: ${{ matrix.python }}
allow-prereleases: true
- run: python -m pip install coverage tox
- run: python -m tox
- uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 # v4.5.0
with:
name: ${{ matrix.python }}
token: ${{ secrets.CODECOV_TOKEN }}


@@ -0,0 +1,12 @@
*.egg-info
*.pyc
*.swp
.cache/
.coverage
.pytest_cache/
.tox/
MANIFEST
build/
dist/
docs/_build/


@@ -0,0 +1,11 @@
# Configure ReadTheDocs.
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.11"
sphinx:
configuration: "docs/conf.py"


@@ -0,0 +1,509 @@
v5.5.1 (2025-01-21)
===================
- Add documentation regarding caching of exceptions.
- Officially support Python 3.13.
- Update CI environment.
v5.5.0 (2024-08-18)
===================
- ``TTLCache.expire()`` returns iterable of expired ``(key, value)``
pairs.
- ``TLRUCache.expire()`` returns iterable of expired ``(key, value)``
pairs.
- Documentation improvements.
- Update CI environment.
v5.4.0 (2024-07-15)
===================
- Add the ``keys.typedmethodkey`` decorator.
- Deprecate ``MRUCache`` class.
- Deprecate ``@func.mru_cache`` decorator.
- Update CI environment.
v5.3.3 (2024-02-26)
===================
- Documentation improvements.
- Update CI environment.
v5.3.2 (2023-10-24)
===================
- Add support for Python 3.12.
- Various documentation improvements.
v5.3.1 (2023-05-27)
===================
- Depend on Python >= 3.7.
v5.3.0 (2023-01-22)
===================
- Add ``cache_info()`` function to ``@cached`` decorator.
v5.2.1 (2023-01-08)
===================
- Add support for Python 3.11.
- Correct version information in RTD documentation.
- ``badges/shields``: Change to GitHub workflow badge routes.
v5.2.0 (2022-05-29)
===================
- Add ``cachetools.keys.methodkey()``.
- Add ``cache_clear()`` function to decorators.
- Add ``src`` directory to ``sys.path`` for Sphinx autodoc.
- Modernize ``func`` wrappers.
v5.1.0 (2022-05-15)
===================
- Add cache decorator parameters as wrapper function attributes.
v5.0.0 (2021-12-21)
===================
- Require Python 3.7 or later (breaking change).
- Remove deprecated submodules (breaking change).
The ``cache``, ``fifo``, ``lfu``, ``lru``, ``mru``, ``rr`` and
``ttl`` submodules have been deleted. Therefore, statements like
``from cachetools.ttl import TTLCache``
will no longer work. Use
``from cachetools import TTLCache``
instead.
- Pass ``self`` to ``@cachedmethod`` key function (breaking change).
The ``key`` function passed to the ``@cachedmethod`` decorator is
now called as ``key(self, *args, **kwargs)``.
The default key function has been changed to ignore its first
argument, so this should only affect applications using custom key
functions with the ``@cachedmethod`` decorator.
- Change exact time of expiration in ``TTLCache`` (breaking change).
``TTLCache`` items now get expired if their expiration time is less
than *or equal to* ``timer()``. For applications using the default
``timer()``, this should be barely noticeable, but it may affect the
use of custom timers with larger tick intervals. Note that this
also implies that a ``TTLCache`` with ``ttl=0`` can no longer hold
any items, since they will expire immediately.
- Change ``Cache.__repr__()`` format (breaking change).
String representations of cache instances now use a more compact and
efficient format, e.g.
``LRUCache({1: 1, 2: 2}, maxsize=10, currsize=2)``
- Add TLRU cache implementation.
- Documentation improvements.
v4.2.4 (2021-09-30)
===================
- Add submodule shims for backward compatibility.
v4.2.3 (2021-09-29)
===================
- Add documentation and tests for using ``TTLCache`` with
``datetime``.
- Link to typeshed typing stubs.
- Flatten package file hierarchy.
v4.2.2 (2021-04-27)
===================
- Update build environment.
- Remove Python 2 remnants.
- Format code with Black.
v4.2.1 (2021-01-24)
===================
- Handle ``__missing__()`` not storing cache items.
- Clean up ``__missing__()`` example.
v4.2.0 (2020-12-10)
===================
- Add FIFO cache implementation.
- Add MRU cache implementation.
- Improve behavior of decorators in case of race conditions.
- Improve documentation regarding mutability of caches values and use
of key functions with decorators.
- Officially support Python 3.9.
v4.1.1 (2020-06-28)
===================
- Improve ``popitem()`` exception context handling.
- Replace ``float('inf')`` with ``math.inf``.
- Improve "envkey" documentation example.
v4.1.0 (2020-04-08)
===================
- Support ``user_function`` with ``cachetools.func`` decorators
(Python 3.8 compatibility).
- Support ``cache_parameters()`` with ``cachetools.func`` decorators
(Python 3.9 compatibility).
v4.0.0 (2019-12-15)
===================
- Require Python 3.5 or later.
v3.1.1 (2019-05-23)
===================
- Document how to use shared caches with ``@cachedmethod``.
- Fix pickling/unpickling of cache keys.
v3.1.0 (2019-01-29)
===================
- Fix Python 3.8 compatibility issue.
- Use ``time.monotonic`` as default timer if available.
- Improve documentation regarding thread safety.
v3.0.0 (2018-11-04)
===================
- Officially support Python 3.7.
- Drop Python 3.3 support (breaking change).
- Remove ``missing`` cache constructor parameter (breaking change).
- Remove ``self`` from ``@cachedmethod`` key arguments (breaking
change).
- Add support for ``maxsize=None`` in ``cachetools.func`` decorators.
v2.1.0 (2018-05-12)
===================
- Deprecate ``missing`` cache constructor parameter.
- Handle overridden ``getsizeof()`` method in subclasses.
- Fix Python 2.7 ``RRCache`` pickling issues.
- Various documentation improvements.
v2.0.1 (2017-08-11)
===================
- Officially support Python 3.6.
- Move documentation to RTD.
- Documentation: Update import paths for key functions (courtesy of
slavkoja).
v2.0.0 (2016-10-03)
===================
- Drop Python 3.2 support (breaking change).
- Drop support for deprecated features (breaking change).
- Move key functions to separate package (breaking change).
- Accept non-integer ``maxsize`` in ``Cache.__repr__()``.
v1.1.6 (2016-04-01)
===================
- Reimplement ``LRUCache`` and ``TTLCache`` using
``collections.OrderedDict``. Note that this will break pickle
compatibility with previous versions.
- Fix ``TTLCache`` not calling ``__missing__()`` of derived classes.
- Handle ``ValueError`` in ``Cache.__missing__()`` for consistency
with caching decorators.
- Improve how ``TTLCache`` handles expired items.
- Use ``Counter.most_common()`` for ``LFUCache.popitem()``.
v1.1.5 (2015-10-25)
===================
- Refactor ``Cache`` base class. Note that this will break pickle
compatibility with previous versions.
- Clean up ``LRUCache`` and ``TTLCache`` implementations.
v1.1.4 (2015-10-24)
===================
- Refactor ``LRUCache`` and ``TTLCache`` implementations. Note that
this will break pickle compatibility with previous versions.
- Document pending removal of deprecated features.
- Minor documentation improvements.
v1.1.3 (2015-09-15)
===================
- Fix pickle tests.
v1.1.2 (2015-09-15)
===================
- Fix pickling of large ``LRUCache`` and ``TTLCache`` instances.
v1.1.1 (2015-09-07)
===================
- Improve key functions.
- Improve documentation.
- Improve unit test coverage.
v1.1.0 (2015-08-28)
===================
- Add ``@cached`` function decorator.
- Add ``hashkey`` and ``typedkey`` functions.
- Add `key` and `lock` arguments to ``@cachedmethod``.
- Set ``__wrapped__`` attributes for Python versions < 3.2.
- Move ``functools`` compatible decorators to ``cachetools.func``.
- Deprecate ``@cachedmethod`` `typed` argument.
- Deprecate `cache` attribute for ``@cachedmethod`` wrappers.
- Deprecate `getsizeof` and `lock` arguments for `cachetools.func`
decorator.
v1.0.3 (2015-06-26)
===================
- Clear cache statistics when calling ``clear_cache()``.
v1.0.2 (2015-06-18)
===================
- Allow simple cache instances to be pickled.
- Refactor ``Cache.getsizeof`` and ``Cache.missing`` default
implementation.
v1.0.1 (2015-06-06)
===================
- Code cleanup for improved PEP 8 conformance.
- Add documentation and unit tests for using ``@cachedmethod`` with
generic mutable mappings.
- Improve documentation.
v1.0.0 (2014-12-19)
===================
- Provide ``RRCache.choice`` property.
- Improve documentation.
v0.8.2 (2014-12-15)
===================
- Use a ``NestedTimer`` for ``TTLCache``.
v0.8.1 (2014-12-07)
===================
- Deprecate ``Cache.getsize()``.
v0.8.0 (2014-12-03)
===================
- Ignore ``ValueError`` raised on cache insertion in decorators.
- Add ``Cache.getsize()``.
- Add ``Cache.__missing__()``.
- Feature freeze for `v1.0`.
v0.7.1 (2014-11-22)
===================
- Fix `MANIFEST.in`.
v0.7.0 (2014-11-12)
===================
- Deprecate ``TTLCache.ExpiredError``.
- Add `choice` argument to ``RRCache`` constructor.
- Refactor ``LFUCache``, ``LRUCache`` and ``TTLCache``.
- Use custom ``NullContext`` implementation for unsynchronized
function decorators.
v0.6.0 (2014-10-13)
===================
- Raise ``TTLCache.ExpiredError`` for expired ``TTLCache`` items.
- Support unsynchronized function decorators.
- Allow ``@cachedmethod.cache()`` to return ``None``.
v0.5.1 (2014-09-25)
===================
- No formatting of ``KeyError`` arguments.
- Update ``README.rst``.
v0.5.0 (2014-09-23)
===================
- Do not delete expired items in ``TTLCache.__getitem__()``.
- Add ``@ttl_cache`` function decorator.
- Fix public ``getsizeof()`` usage.
v0.4.0 (2014-06-16)
===================
- Add ``TTLCache``.
- Add ``Cache`` base class.
- Remove ``@cachedmethod`` `lock` parameter.
v0.3.1 (2014-05-07)
===================
- Add proper locking for ``cache_clear()`` and ``cache_info()``.
- Report `size` in ``cache_info()``.
v0.3.0 (2014-05-06)
===================
- Remove ``@cache`` decorator.
- Add ``size``, ``getsizeof`` members.
- Add ``@cachedmethod`` decorator.
v0.2.0 (2014-04-02)
===================
- Add ``@cache`` decorator.
- Update documentation.
v0.1.0 (2014-03-27)
===================
- Initial release.


@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2014-2024 Thomas Kemmer
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -0,0 +1,11 @@
include CHANGELOG.rst
include LICENSE
include MANIFEST.in
include README.rst
include tox.ini
exclude .readthedocs.yaml
recursive-include docs *
prune docs/_build
recursive-include tests *.py


@@ -0,0 +1,125 @@
cachetools
========================================================================
.. image:: https://img.shields.io/pypi/v/cachetools
:target: https://pypi.org/project/cachetools/
:alt: Latest PyPI version
.. image:: https://img.shields.io/github/actions/workflow/status/tkem/cachetools/ci.yml
:target: https://github.com/tkem/cachetools/actions/workflows/ci.yml
:alt: CI build status
.. image:: https://img.shields.io/readthedocs/cachetools
:target: https://cachetools.readthedocs.io/
:alt: Documentation build status
.. image:: https://img.shields.io/codecov/c/github/tkem/cachetools/master.svg
:target: https://codecov.io/gh/tkem/cachetools
:alt: Test coverage
.. image:: https://img.shields.io/librariesio/sourcerank/pypi/cachetools
:target: https://libraries.io/pypi/cachetools
:alt: Libraries.io SourceRank
.. image:: https://img.shields.io/github/license/tkem/cachetools
:target: https://raw.github.com/tkem/cachetools/master/LICENSE
:alt: License
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
This module provides various memoizing collections and decorators,
including variants of the Python Standard Library's `@lru_cache`_
function decorator.
.. code-block:: python
from cachetools import cached, LRUCache, TTLCache
# speed up calculating Fibonacci numbers with dynamic programming
@cached(cache={})
def fib(n):
return n if n < 2 else fib(n - 1) + fib(n - 2)
# cache least recently used Python Enhancement Proposals
@cached(cache=LRUCache(maxsize=32))
def get_pep(num):
url = 'http://www.python.org/dev/peps/pep-%04d/' % num
with urllib.request.urlopen(url) as s:
return s.read()
# cache weather data for no longer than ten minutes
@cached(cache=TTLCache(maxsize=1024, ttl=600))
def get_weather(place):
return owm.weather_at_place(place).get_weather()
For the purpose of this module, a *cache* is a mutable_ mapping_ of a
fixed maximum size. When the cache is full, i.e. adding another item
would exceed its maximum size, the cache must choose which item(s) to
discard based on a suitable `cache algorithm`_.
This module provides multiple cache classes based on different cache
algorithms, as well as decorators for easily memoizing function and
method calls.
Installation
------------------------------------------------------------------------
cachetools is available from PyPI_ and can be installed by running::
pip install cachetools
Typing stubs for this package are provided by typeshed_ and can be
installed by running::
pip install types-cachetools
Project Resources
------------------------------------------------------------------------
- `Documentation`_
- `Issue tracker`_
- `Source code`_
- `Change log`_
Related Projects
------------------------------------------------------------------------
- asyncache_: Helpers to use cachetools with async functions
- cacheing_: Pure Python Cacheing Library
- CacheToolsUtils_: Cachetools Utilities
- kids.cache_: Kids caching library
- shelved-cache_: Persistent cache for Python cachetools
License
------------------------------------------------------------------------
Copyright (c) 2014-2024 Thomas Kemmer.
Licensed under the `MIT License`_.
.. _@lru_cache: https://docs.python.org/3/library/functools.html#functools.lru_cache
.. _mutable: https://docs.python.org/dev/glossary.html#term-mutable
.. _mapping: https://docs.python.org/dev/glossary.html#term-mapping
.. _cache algorithm: https://en.wikipedia.org/wiki/Cache_algorithms
.. _PyPI: https://pypi.org/project/cachetools/
.. _typeshed: https://github.com/python/typeshed/
.. _Documentation: https://cachetools.readthedocs.io/
.. _Issue tracker: https://github.com/tkem/cachetools/issues/
.. _Source code: https://github.com/tkem/cachetools/
.. _Change log: https://github.com/tkem/cachetools/blob/master/CHANGELOG.rst
.. _MIT License: https://raw.github.com/tkem/cachetools/master/LICENSE
.. _asyncache: https://pypi.org/project/asyncache/
.. _cacheing: https://github.com/breid48/cacheing
.. _CacheToolsUtils: https://pypi.org/project/CacheToolsUtils/
.. _kids.cache: https://pypi.org/project/kids.cache/
.. _shelved-cache: https://pypi.org/project/shelved-cache/


@@ -0,0 +1,33 @@
import pathlib
import sys
src_directory = (pathlib.Path(__file__).parent.parent / "src").resolve()
sys.path.insert(0, str(src_directory))
# Extract the current version from the source.
def get_version():
"""Get the version and release from the source code."""
text = (src_directory / "cachetools/__init__.py").read_text()
for line in text.splitlines():
if not line.strip().startswith("__version__"):
continue
full_version = line.partition("=")[2].strip().strip("\"'")
partial_version = ".".join(full_version.split(".")[:2])
return full_version, partial_version
project = "cachetools"
copyright = "2014-2024 Thomas Kemmer"
release, version = get_version()
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.coverage",
"sphinx.ext.doctest",
"sphinx.ext.todo",
]
exclude_patterns = ["_build"]
master_doc = "index"
html_theme = "classic"


@@ -0,0 +1,716 @@
:tocdepth: 3
*********************************************************************
:mod:`cachetools` --- Extensible memoizing collections and decorators
*********************************************************************
.. module:: cachetools
This module provides various memoizing collections and decorators,
including variants of the Python Standard Library's `@lru_cache`_
function decorator.
For the purpose of this module, a *cache* is a mutable_ mapping_ of a
fixed maximum size. When the cache is full, i.e. adding another item
would exceed its maximum size, the cache must choose which item(s) to
discard based on a suitable `cache algorithm`_.
This module provides multiple cache classes based on different cache
algorithms, as well as decorators for easily memoizing function and
method calls.
.. testsetup:: *
from cachetools import cached, cachedmethod, LRUCache, TLRUCache, TTLCache
from unittest import mock
urllib = mock.MagicMock()
import time
Cache implementations
=====================
This module provides several classes implementing caches using
different cache algorithms. All these classes derive from class
:class:`Cache`, which in turn derives from
:class:`collections.abc.MutableMapping`, and provide :attr:`maxsize` and
:attr:`currsize` properties to retrieve the maximum and current size
of the cache. When a cache is full, :meth:`Cache.__setitem__()` calls
:meth:`self.popitem()` repeatedly until there is enough room for the
item to be added.
In general, a cache's size is the total size of its items' values.
Therefore, :class:`Cache` provides a :meth:`getsizeof` method, which
returns the size of a given `value`. The default implementation of
:meth:`getsizeof` returns :const:`1` irrespective of its argument,
making the cache's size equal to the number of its items, or
``len(cache)``. For convenience, all cache classes accept an optional
named constructor parameter `getsizeof`, which may specify a function
of one argument used to retrieve the size of an item's value.
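For example, a cache limited by the total length of its string values
might be created as follows (a minimal sketch; the key and value are
illustrative)::

    from cachetools import LRUCache

    cache = LRUCache(maxsize=1024 * 1024, getsizeof=len)
    cache['answer'] = 'forty-two'
    assert cache.currsize == 9  # len('forty-two')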
Note that the values of a :class:`Cache` are mutable by default, as
are e.g. the values of a :class:`dict`. It is the user's
responsibility to take care that cached values are not accidentally
modified. This is especially important when using a custom
`getsizeof` function, since the size of an item's value will only be
computed when the item is inserted into the cache.
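The following minimal sketch illustrates the pitfall: mutating a cached
value in place does not update the size recorded when the item was
inserted::

    from cachetools import LRUCache

    cache = LRUCache(maxsize=10, getsizeof=len)
    cache['k'] = [1, 2, 3]
    assert cache.currsize == 3
    cache['k'].append(4)        # mutates the cached value in place...
    assert cache.currsize == 3  # ...but the recorded size is unchanged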
.. note::
Please be aware that all these classes are *not* thread-safe.
Access to a shared cache from multiple threads must be properly
synchronized, e.g. by using one of the memoizing decorators with a
suitable `lock` object.
.. autoclass:: Cache(maxsize, getsizeof=None)
:members: currsize, getsizeof, maxsize
This class discards arbitrary items using :meth:`popitem` to make
space when necessary. Derived classes may override :meth:`popitem`
to implement specific caching strategies. If a subclass has to
keep track of item access, insertion or deletion, it may
additionally need to override :meth:`__getitem__`,
:meth:`__setitem__` and :meth:`__delitem__`.
.. autoclass:: FIFOCache(maxsize, getsizeof=None)
:members: popitem
This class evicts items in the order they were added to make space
when necessary.
.. autoclass:: LFUCache(maxsize, getsizeof=None)
:members: popitem
This class counts how often an item is retrieved, and discards the
items used least often to make space when necessary.
.. autoclass:: LRUCache(maxsize, getsizeof=None)
:members: popitem
This class discards the least recently used items first to make
space when necessary.
.. autoclass:: MRUCache(maxsize, getsizeof=None)
:members: popitem
This class discards the most recently used items first to make
space when necessary.
.. deprecated:: 5.4
`MRUCache` has been deprecated due to lack of use, to reduce
maintenance. Please choose another cache implementation that suits
your needs.
.. autoclass:: RRCache(maxsize, choice=random.choice, getsizeof=None)
:members: choice, popitem
This class randomly selects candidate items and discards them to
make space when necessary.
By default, items are selected from the list of cache keys using
:func:`random.choice`. The optional argument `choice` may specify
an alternative function that returns an arbitrary element from a
non-empty sequence.
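For example, a (contrived) cache that always discards the last key in
insertion order could be sketched as::

    from cachetools import RRCache

    # hypothetical policy: evict the key inserted most recently
    cache = RRCache(maxsize=10, choice=lambda keys: keys[-1])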
.. autoclass:: TTLCache(maxsize, ttl, timer=time.monotonic, getsizeof=None)
:members: popitem, timer, ttl
This class associates a time-to-live value with each item. Items
that have exceeded their time-to-live are no longer accessible and
will eventually be removed. If there are no expired items to
remove, the least recently used items will be discarded first to
make space when necessary.
By default, the time-to-live is specified in seconds and
:func:`time.monotonic` is used to retrieve the current time.
.. testcode::
cache = TTLCache(maxsize=10, ttl=60)
A custom `timer` function can also be supplied, which does not have
to return seconds, or even a numeric value. The expression
`timer() + ttl` at the time of insertion defines the expiration
time of a cache item and must be comparable against later results
of `timer()`, but `ttl` does not necessarily have to be a number,
either.
.. testcode::
from datetime import datetime, timedelta
cache = TTLCache(maxsize=10, ttl=timedelta(hours=12), timer=datetime.now)
.. method:: expire(self, time=None)
Expired items will be removed from a cache only at the next
mutating operation, e.g. :meth:`__setitem__` or
:meth:`__delitem__`, and therefore may still claim memory.
Calling this method removes all items whose time-to-live would
have expired by `time`, so garbage collection is free to reuse
their memory. If `time` is :const:`None`, this removes all
items that have expired by the current value returned by
:attr:`timer`.
:returns: An iterable of expired `(key, value)` pairs.
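A minimal sketch, using a hand-rolled timer so that expiration can be
triggered deterministically (the ``clock`` list is purely
illustrative)::

    from cachetools import TTLCache

    clock = [0]
    cache = TTLCache(maxsize=10, ttl=5, timer=lambda: clock[0])
    cache['a'] = 1
    clock[0] = 10                        # advance past the expiration time
    assert cache.expire() == [('a', 1)]  # expired pairs are returned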
.. autoclass:: TLRUCache(maxsize, ttu, timer=time.monotonic, getsizeof=None)
:members: popitem, timer, ttu
Similar to :class:`TTLCache`, this class also associates an
expiration time with each item. However, for :class:`TLRUCache`
items, expiration time is calculated by a user-provided time-to-use
(`ttu`) function, which is passed three arguments at the time of
insertion: the new item's key and value, as well as the current
value of `timer()`.
.. testcode::
def my_ttu(_key, value, now):
# assume value.ttu contains the item's time-to-use in seconds
# note that the _key argument is ignored in this example
return now + value.ttu
cache = TLRUCache(maxsize=10, ttu=my_ttu)
The expression `ttu(key, value, timer())` defines the expiration
time of a cache item, and must be comparable against later results
of `timer()`. As with :class:`TTLCache`, a custom `timer` function
can be supplied, which does not have to return a numeric value.
.. testcode::
from datetime import datetime, timedelta
def datetime_ttu(_key, value, now):
# assume now to be of type datetime.datetime, and
# value.hours to contain the item's time-to-use in hours
return now + timedelta(hours=value.hours)
cache = TLRUCache(maxsize=10, ttu=datetime_ttu, timer=datetime.now)
Items that have exceeded their time-to-use are no longer accessible
and will eventually be removed. If there are no expired items to
remove, the least recently used items will be discarded first to
make space when necessary.
.. method:: expire(self, time=None)
Expired items will be removed from a cache only at the next
mutating operation, e.g. :meth:`__setitem__` or
:meth:`__delitem__`, and therefore may still claim memory.
Calling this method removes all items whose time-to-use would
have expired by `time`, so garbage collection is free to reuse
their memory. If `time` is :const:`None`, this removes all
items that have expired by the current value returned by
:attr:`timer`.
:returns: An iterable of expired `(key, value)` pairs.
Extending cache classes
=======================
Sometimes it may be desirable to notice when and what cache items are
evicted, i.e. removed from a cache to make room for new items. Since
all cache implementations call :meth:`popitem` to evict items from the
cache, this can be achieved by overriding this method in a subclass:
.. doctest::
:pyversion: >= 3
>>> class MyCache(LRUCache):
... def popitem(self):
... key, value = super().popitem()
... print('Key "%s" evicted with value "%s"' % (key, value))
... return key, value
>>> c = MyCache(maxsize=2)
>>> c['a'] = 1
>>> c['b'] = 2
>>> c['c'] = 3
Key "a" evicted with value "1"
With :class:`TTLCache` and :class:`TLRUCache`, items may also be
removed after they expire. In this case, :meth:`popitem` will *not*
be called, but :meth:`expire` will be called from the next mutating
operation and will return an iterable of the expired `(key, value)`
pairs. By overriding :meth:`expire`, a subclass will be able to track
expired items:
.. doctest::
:pyversion: >= 3
>>> class ExpCache(TTLCache):
... def expire(self, time=None):
... items = super().expire(time)
... print(f"Expired items: {items}")
... return items
>>> c = ExpCache(maxsize=10, ttl=1.0)
>>> c['a'] = 1
Expired items: []
>>> c['b'] = 2
Expired items: []
>>> time.sleep(1.5)
>>> c['c'] = 3
Expired items: [('a', 1), ('b', 2)]
Similar to the standard library's :class:`collections.defaultdict`,
subclasses of :class:`Cache` may implement a :meth:`__missing__`
method which is called by :meth:`Cache.__getitem__` if the requested
key is not found:
.. doctest::
:pyversion: >= 3
>>> class PepStore(LRUCache):
... def __missing__(self, key):
... """Retrieve text of a Python Enhancement Proposal"""
... url = 'http://www.python.org/dev/peps/pep-%04d/' % key
... with urllib.request.urlopen(url) as s:
... pep = s.read()
... self[key] = pep # store text in cache
... return pep
>>> peps = PepStore(maxsize=4)
>>> for n in 8, 9, 290, 308, 320, 8, 218, 320, 279, 289, 320:
... pep = peps[n]
>>> print(sorted(peps.keys()))
[218, 279, 289, 320]
Note, though, that such a class does not really behave like a *cache*
any more, and will lead to surprising results when used with any of
the memoizing decorators described below. However, it may be useful
in its own right.
Memoizing decorators
====================
The :mod:`cachetools` module provides decorators for memoizing
function and method calls. This can save time when a function is
often called with the same arguments:
.. doctest::
>>> @cached(cache={})
... def fib(n):
... 'Compute the nth number in the Fibonacci sequence'
... return n if n < 2 else fib(n - 1) + fib(n - 2)
>>> fib(42)
267914296
.. decorator:: cached(cache, key=cachetools.keys.hashkey, lock=None, info=False)
Decorator to wrap a function with a memoizing callable that saves
results in a cache.
The `cache` argument specifies a cache object to store previous
function arguments and return values. Note that `cache` need not
be an instance of the cache implementations provided by the
:mod:`cachetools` module. :func:`cached` will work with any
mutable mapping type, including plain :class:`dict` and
:class:`weakref.WeakValueDictionary`.
`key` specifies a function that will be called with the same
positional and keyword arguments as the wrapped function itself,
and which has to return a suitable cache key. Since caches are
mappings, the object returned by `key` must be hashable. The
default is to call :func:`cachetools.keys.hashkey`.
If `lock` is not :const:`None`, it must specify an object
implementing the `context manager`_ protocol. Any access to the
cache will then be nested in a ``with lock:`` statement. This can
be used for synchronizing thread access to the cache by providing a
:class:`threading.Lock` instance, for example.
.. note::
The `lock` context manager is used only to guard access to the
cache object. The underlying wrapped function will be called
outside the `with` statement, and must be thread-safe by itself.
The decorator's `cache`, `key` and `lock` parameters are also
available as :attr:`cache`, :attr:`cache_key` and
:attr:`cache_lock` attributes of the memoizing wrapper function.
These can be used for clearing the cache or invalidating individual
cache items, for example.
.. testcode::
from threading import Lock
# 640K should be enough for anyone...
@cached(cache=LRUCache(maxsize=640*1024, getsizeof=len), lock=Lock())
def get_pep(num):
'Retrieve text of a Python Enhancement Proposal'
url = 'http://www.python.org/dev/peps/pep-%04d/' % num
with urllib.request.urlopen(url) as s:
return s.read()
# make sure access to cache is synchronized
with get_pep.cache_lock:
get_pep.cache.clear()
# always use the key function for accessing cache items
with get_pep.cache_lock:
get_pep.cache.pop(get_pep.cache_key(42), None)
For the common use case of clearing or invalidating the cache, the
decorator also provides a :func:`cache_clear()` function which
takes care of locking automatically, if needed:
.. testcode::
# no need for get_pep.cache_lock here
get_pep.cache_clear()
If `info` is set to :const:`True`, the wrapped function is
instrumented with a :func:`cache_info()` function that returns a
named tuple showing `hits`, `misses`, `maxsize` and `currsize`, to
help measure the effectiveness of the cache.
.. note::
Note that this incurs a (probably minor) performance penalty, so it
has to be enabled explicitly.
.. doctest::
:pyversion: >= 3
>>> @cached(cache=LRUCache(maxsize=32), info=True)
... def get_pep(num):
... url = 'http://www.python.org/dev/peps/pep-%04d/' % num
... with urllib.request.urlopen(url) as s:
... return s.read()
>>> for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
... pep = get_pep(n)
>>> get_pep.cache_info()
CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)
The original underlying function is accessible through the
:attr:`__wrapped__` attribute. This can be used for introspection
or for bypassing the cache.
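For example (a minimal sketch; ``square`` is illustrative)::

    from cachetools import cached

    @cached(cache={})
    def square(n):
        return n * n

    assert square(3) == 9              # goes through the cache
    assert square.__wrapped__(3) == 9  # bypasses the cache entirely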
It is also possible to use a single shared cache object with
multiple functions. However, care must be taken that different
cache keys are generated for each function, even for identical
function arguments:
.. doctest::
:options: +ELLIPSIS
>>> from cachetools.keys import hashkey
>>> from functools import partial
>>> # shared cache for integer sequences
>>> numcache = {}
>>> # compute Fibonacci numbers
>>> @cached(numcache, key=partial(hashkey, 'fib'))
... def fib(n):
... return n if n < 2 else fib(n - 1) + fib(n - 2)
>>> # compute Lucas numbers
>>> @cached(numcache, key=partial(hashkey, 'luc'))
... def luc(n):
... return 2 - n if n < 2 else luc(n - 1) + luc(n - 2)
>>> fib(42)
267914296
>>> luc(42)
599074578
>>> list(sorted(numcache.items()))
[..., (('fib', 42), 267914296), ..., (('luc', 42), 599074578)]
Function invocations are *not* cached if an exception is raised.
To cache some (or all) calls raising exceptions, additional
function wrappers may be introduced which wrap exceptions as
regular function results for caching purposes:
.. testcode::
@cached(cache=LRUCache(maxsize=10), info=True)
def _get_pep_wrapped(num):
url = "http://www.python.org/dev/peps/pep-%04d/" % num
try:
with urllib.request.urlopen(url) as s:
return s.read()
except urllib.error.HTTPError as e:
# note that only HTTPError instances are cached
return e
def get_pep(num):
"Retrieve text of a Python Enhancement Proposal"
res = _get_pep_wrapped(num)
if isinstance(res, Exception):
raise res
else:
return res
try:
get_pep(100_000_000)
except Exception as e:
print(e, "-", _get_pep_wrapped.cache_info())
try:
get_pep(100_000_000)
except Exception as e:
print(e, "-", _get_pep_wrapped.cache_info())
.. decorator:: cachedmethod(cache, key=cachetools.keys.methodkey, lock=None)
Decorator to wrap a class or instance method with a memoizing
callable that saves results in a (possibly shared) cache.
The main difference between this and the :func:`cached` function
decorator is that `cache` and `lock` are not passed objects, but
functions. Both will be called with :const:`self` (or :const:`cls`
for class methods) as their sole argument to retrieve the cache or
lock object for the method's respective instance or class.
.. note::
As with :func:`cached`, the context manager obtained by calling
``lock(self)`` will only guard access to the cache itself. It
is the user's responsibility to handle concurrent calls to the
underlying wrapped method in a multithreaded environment.
The `key` function will be called as `key(self, *args, **kwargs)`
to retrieve a suitable cache key. Note that the default `key`
function, :func:`cachetools.keys.methodkey`, ignores its first
argument, i.e. :const:`self`. This is mostly for historical reasons,
but also ensures that :const:`self` does not have to be hashable.
You may provide a different `key` function,
e.g. :func:`cachetools.keys.hashkey`, if you need :const:`self` to
be part of the cache key.
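For example, the following sketch makes :const:`self` part of the
cache key by passing :func:`cachetools.keys.hashkey` explicitly
(``Squarer`` is illustrative; its instances are hashable by default)::

    from cachetools import LRUCache, cachedmethod
    from cachetools.keys import hashkey

    class Squarer:
        def __init__(self):
            self.cache = LRUCache(maxsize=32)

        @cachedmethod(lambda self: self.cache, key=hashkey)
        def square(self, n):
            # the key is hashkey(self, n), so entries are per-instance
            return n * n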
One advantage of :func:`cachedmethod` over the :func:`cached`
function decorator is that cache properties such as `maxsize` can
be set at runtime:
.. testcode::
class CachedPEPs:
def __init__(self, cachesize):
self.cache = LRUCache(maxsize=cachesize)
@cachedmethod(lambda self: self.cache)
def get(self, num):
"""Retrieve text of a Python Enhancement Proposal"""
url = 'http://www.python.org/dev/peps/pep-%04d/' % num
with urllib.request.urlopen(url) as s:
return s.read()
peps = CachedPEPs(cachesize=10)
print("PEP #1: %s" % peps.get(1))
.. testoutput::
:hide:
:options: +ELLIPSIS
PEP #1: ...
When using a shared cache for multiple methods, be aware that
different cache keys must be created for each method even when
function arguments are the same, just as with the `@cached`
decorator:
.. testcode::
class CachedReferences:
def __init__(self, cachesize):
self.cache = LRUCache(maxsize=cachesize)
@cachedmethod(lambda self: self.cache, key=partial(hashkey, 'pep'))
def get_pep(self, num):
"""Retrieve text of a Python Enhancement Proposal"""
url = 'http://www.python.org/dev/peps/pep-%04d/' % num
with urllib.request.urlopen(url) as s:
return s.read()
@cachedmethod(lambda self: self.cache, key=partial(hashkey, 'rfc'))
def get_rfc(self, num):
"""Retrieve text of an IETF Request for Comments"""
url = 'https://tools.ietf.org/rfc/rfc%d.txt' % num
with urllib.request.urlopen(url) as s:
return s.read()
docs = CachedReferences(cachesize=100)
print("PEP #1: %s" % docs.get_pep(1))
print("RFC #1: %s" % docs.get_rfc(1))
.. testoutput::
:hide:
:options: +ELLIPSIS
PEP #1: ...
RFC #1: ...
*****************************************************************
:mod:`cachetools.keys` --- Key functions for memoizing decorators
*****************************************************************
.. module:: cachetools.keys
This module provides several functions that can be used as key
functions with the :func:`cached` and :func:`cachedmethod` decorators:
.. autofunction:: hashkey
This function returns a :class:`tuple` instance suitable as a cache
key, provided the positional and keyword arguments are hashable.
.. autofunction:: methodkey
This function is similar to :func:`hashkey`, but ignores its
first positional argument, i.e. `self` when used with the
:func:`cachedmethod` decorator.
.. autofunction:: typedkey
This function is similar to :func:`hashkey`, but arguments of
different types will yield distinct cache keys. For example,
``typedkey(3)`` and ``typedkey(3.0)`` will return different
results.
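For example (a short sketch)::

    from cachetools.keys import hashkey, typedkey

    assert hashkey(3) == hashkey(3.0)    # 3 == 3.0, so the keys compare equal
    assert typedkey(3) != typedkey(3.0)  # int and float yield distinct keys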
.. autofunction:: typedmethodkey
This function is similar to :func:`typedkey`, but ignores its
first positional argument, i.e. `self` when used with the
:func:`cachedmethod` decorator.
These functions can also be helpful when implementing custom key
functions for handling some non-hashable arguments. For example,
calling the following function with a dictionary as its `env` argument
will raise a :class:`TypeError`, since :class:`dict` is not hashable::
@cached(LRUCache(maxsize=128))
def foo(x, y, z, env={}):
pass
However, if `env` always holds only hashable values itself, a custom
key function can be written that handles the `env` keyword argument
specially::
def envkey(*args, env={}, **kwargs):
key = hashkey(*args, **kwargs)
key += tuple(sorted(env.items()))
return key
The :func:`envkey` function can then be used in decorator declarations
like this::
@cached(LRUCache(maxsize=128), key=envkey)
def foo(x, y, z, env={}):
pass
foo(1, 2, 3, env=dict(a='a', b='b'))
****************************************************************************
:mod:`cachetools.func` --- :func:`functools.lru_cache` compatible decorators
****************************************************************************
.. module:: cachetools.func
To ease migration from (or to) Python 3's :func:`functools.lru_cache`,
this module provides several memoizing function decorators with a
similar API. All these decorators wrap a function with a memoizing
callable that saves up to the `maxsize` most recent calls, using
different caching strategies. If `maxsize` is set to :const:`None`,
the caching strategy is effectively disabled and the cache can grow
without bound.
If the optional argument `typed` is set to :const:`True`, function
arguments of different types will be cached separately. For example,
``f(3)`` and ``f(3.0)`` will be treated as distinct calls with
distinct results.
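A minimal sketch (``echo`` is illustrative)::

    import cachetools.func

    @cachetools.func.lru_cache(maxsize=128, typed=True)
    def echo(x):
        return x

    echo(3)
    echo(3.0)
    assert echo.cache_info().currsize == 2  # two distinct cache entries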
If a `user_function` is specified instead, it must be a callable.
This allows the decorator to be applied directly to a user function,
leaving the `maxsize` at its default value of 128::
@cachetools.func.lru_cache
def count_vowels(sentence):
sentence = sentence.casefold()
return sum(sentence.count(vowel) for vowel in 'aeiou')
The wrapped function is instrumented with a :func:`cache_parameters`
function that returns a new :class:`dict` showing the values for
`maxsize` and `typed`. This is for information purposes only.
Mutating the values has no effect.
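For example, continuing the ``count_vowels`` sketch above (only the
documented ``maxsize`` and ``typed`` entries are assumed)::

    params = count_vowels.cache_parameters()
    assert params['maxsize'] == 128
    assert params['typed'] is False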
The wrapped function is also instrumented with :func:`cache_info` and
:func:`cache_clear` functions to provide information about cache
performance and clear the cache. Please see the
:func:`functools.lru_cache` documentation for details. Also note that
all the decorators in this module are thread-safe by default.
.. decorator:: fifo_cache(user_function)
fifo_cache(maxsize=128, typed=False)
Decorator that wraps a function with a memoizing callable that
saves up to `maxsize` results based on a First In First Out
(FIFO) algorithm.
.. decorator:: lfu_cache(user_function)
lfu_cache(maxsize=128, typed=False)
Decorator that wraps a function with a memoizing callable that
saves up to `maxsize` results based on a Least Frequently Used
(LFU) algorithm.
.. decorator:: lru_cache(user_function)
lru_cache(maxsize=128, typed=False)
Decorator that wraps a function with a memoizing callable that
saves up to `maxsize` results based on a Least Recently Used (LRU)
algorithm.
.. decorator:: mru_cache(user_function)
mru_cache(maxsize=128, typed=False)
Decorator that wraps a function with a memoizing callable that
saves up to `maxsize` results based on a Most Recently Used (MRU)
algorithm.
.. deprecated:: 5.4
The `mru_cache` decorator has been deprecated due to lack of use.
Please choose a decorator based on some other algorithm.
.. decorator:: rr_cache(user_function)
rr_cache(maxsize=128, choice=random.choice, typed=False)
Decorator that wraps a function with a memoizing callable that
saves up to `maxsize` results based on a Random Replacement (RR)
algorithm.
.. decorator:: ttl_cache(user_function)
ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False)
Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm with a per-item time-to-live (TTL) value.
.. _@lru_cache: http://docs.python.org/3/library/functools.html#functools.lru_cache
.. _cache algorithm: http://en.wikipedia.org/wiki/Cache_algorithms
.. _context manager: http://docs.python.org/dev/glossary.html#term-context-manager
.. _mapping: http://docs.python.org/dev/glossary.html#term-mapping
.. _mutable: http://docs.python.org/dev/glossary.html#term-mutable


@@ -0,0 +1,3 @@
[build-system]
requires = ["setuptools >= 46.4.0", "wheel"]
build-backend = "setuptools.build_meta"


@@ -0,0 +1,48 @@
[metadata]
name = cachetools
version = attr: cachetools.__version__
url = https://github.com/tkem/cachetools/
author = Thomas Kemmer
author_email = tkemmer@computer.org
license = MIT
license_files = LICENSE
description = Extensible memoizing collections and decorators
long_description = file: README.rst
classifiers =
Development Status :: 5 - Production/Stable
Environment :: Other Environment
Intended Audience :: Developers
License :: OSI Approved :: MIT License
Operating System :: OS Independent
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Programming Language :: Python :: 3.12
Programming Language :: Python :: 3.13
Topic :: Software Development :: Libraries :: Python Modules
[options]
package_dir =
= src
packages = find:
python_requires = >= 3.7
[options.packages.find]
where = src
[flake8]
max-line-length = 80
exclude = .git, .tox, build
select = C, E, F, W, B, B950, I, N
# F401: imported but unused (submodule shims)
# E501: line too long (black)
ignore = F401, E501
[build_sphinx]
source-dir = docs/
build-dir = docs/_build
all_files = 1


@@ -0,0 +1,3 @@
from setuptools import setup
setup()


@@ -0,0 +1,859 @@
"""Extensible memoizing collections and decorators."""
__all__ = (
"Cache",
"FIFOCache",
"LFUCache",
"LRUCache",
"MRUCache",
"RRCache",
"TLRUCache",
"TTLCache",
"cached",
"cachedmethod",
)
__version__ = "5.5.1"
import collections
import collections.abc
import functools
import heapq
import random
import time
from . import keys
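# Size bookkeeping sentinel used when no custom getsizeof() is supplied:
# every key reports size 1, so a cache's currsize equals its item count.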
class _DefaultSize:
__slots__ = ()
def __getitem__(self, _):
return 1
def __setitem__(self, _, value):
assert value == 1
def pop(self, _):
return 1
class Cache(collections.abc.MutableMapping):
"""Mutable mapping to serve as a simple cache or cache base class."""
__marker = object()
__size = _DefaultSize()
def __init__(self, maxsize, getsizeof=None):
if getsizeof:
self.getsizeof = getsizeof
if self.getsizeof is not Cache.getsizeof:
self.__size = dict()
self.__data = dict()
self.__currsize = 0
self.__maxsize = maxsize
def __repr__(self):
return "%s(%s, maxsize=%r, currsize=%r)" % (
self.__class__.__name__,
repr(self.__data),
self.__maxsize,
self.__currsize,
)
def __getitem__(self, key):
try:
return self.__data[key]
except KeyError:
return self.__missing__(key)
def __setitem__(self, key, value):
maxsize = self.__maxsize
size = self.getsizeof(value)
if size > maxsize:
raise ValueError("value too large")
if key not in self.__data or self.__size[key] < size:
while self.__currsize + size > maxsize:
self.popitem()
if key in self.__data:
diffsize = size - self.__size[key]
else:
diffsize = size
self.__data[key] = value
self.__size[key] = size
self.__currsize += diffsize
def __delitem__(self, key):
size = self.__size.pop(key)
del self.__data[key]
self.__currsize -= size
def __contains__(self, key):
return key in self.__data
def __missing__(self, key):
raise KeyError(key)
def __iter__(self):
return iter(self.__data)
def __len__(self):
return len(self.__data)
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def pop(self, key, default=__marker):
if key in self:
value = self[key]
del self[key]
elif default is self.__marker:
raise KeyError(key)
else:
value = default
return value
def setdefault(self, key, default=None):
if key in self:
value = self[key]
else:
self[key] = value = default
return value
@property
def maxsize(self):
"""The maximum size of the cache."""
return self.__maxsize
@property
def currsize(self):
"""The current size of the cache."""
return self.__currsize
@staticmethod
def getsizeof(value):
"""Return the size of a cache element's value."""
return 1
class FIFOCache(Cache):
"""First In First Out (FIFO) cache implementation."""
def __init__(self, maxsize, getsizeof=None):
Cache.__init__(self, maxsize, getsizeof)
self.__order = collections.OrderedDict()
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
cache_setitem(self, key, value)
try:
self.__order.move_to_end(key)
except KeyError:
self.__order[key] = None
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
cache_delitem(self, key)
del self.__order[key]
def popitem(self):
"""Remove and return the `(key, value)` pair first inserted."""
try:
key = next(iter(self.__order))
except StopIteration:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
class LFUCache(Cache):
"""Least Frequently Used (LFU) cache implementation."""
def __init__(self, maxsize, getsizeof=None):
Cache.__init__(self, maxsize, getsizeof)
self.__counter = collections.Counter()
def __getitem__(self, key, cache_getitem=Cache.__getitem__):
value = cache_getitem(self, key)
if key in self: # __missing__ may not store item
self.__counter[key] -= 1
return value
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
cache_setitem(self, key, value)
self.__counter[key] -= 1
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
cache_delitem(self, key)
del self.__counter[key]
def popitem(self):
"""Remove and return the `(key, value)` pair least frequently used."""
try:
((key, _),) = self.__counter.most_common(1)
except ValueError:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
class LRUCache(Cache):
"""Least Recently Used (LRU) cache implementation."""
def __init__(self, maxsize, getsizeof=None):
Cache.__init__(self, maxsize, getsizeof)
self.__order = collections.OrderedDict()
def __getitem__(self, key, cache_getitem=Cache.__getitem__):
value = cache_getitem(self, key)
if key in self: # __missing__ may not store item
self.__update(key)
return value
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
cache_setitem(self, key, value)
self.__update(key)
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
cache_delitem(self, key)
del self.__order[key]
def popitem(self):
"""Remove and return the `(key, value)` pair least recently used."""
try:
key = next(iter(self.__order))
except StopIteration:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
def __update(self, key):
try:
self.__order.move_to_end(key)
except KeyError:
self.__order[key] = None
class MRUCache(Cache):
"""Most Recently Used (MRU) cache implementation."""
def __init__(self, maxsize, getsizeof=None):
from warnings import warn
warn("MRUCache is deprecated", DeprecationWarning, stacklevel=2)
Cache.__init__(self, maxsize, getsizeof)
self.__order = collections.OrderedDict()
def __getitem__(self, key, cache_getitem=Cache.__getitem__):
value = cache_getitem(self, key)
if key in self: # __missing__ may not store item
self.__update(key)
return value
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
cache_setitem(self, key, value)
self.__update(key)
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
cache_delitem(self, key)
del self.__order[key]
def popitem(self):
"""Remove and return the `(key, value)` pair most recently used."""
try:
key = next(iter(self.__order))
except StopIteration:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
def __update(self, key):
try:
self.__order.move_to_end(key, last=False)
except KeyError:
self.__order[key] = None
class RRCache(Cache):
"""Random Replacement (RR) cache implementation."""
def __init__(self, maxsize, choice=random.choice, getsizeof=None):
Cache.__init__(self, maxsize, getsizeof)
self.__choice = choice
@property
def choice(self):
"""The `choice` function used by the cache."""
return self.__choice
def popitem(self):
"""Remove and return a random `(key, value)` pair."""
try:
key = self.__choice(list(self))
except IndexError:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
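# The nested _Timer below "freezes" timer readings while used as a context
# manager: all reads inside the outermost `with` block return the time
# captured on entry, so one cache operation sees a single, consistent time.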
class _TimedCache(Cache):
"""Base class for time aware cache implementations."""
class _Timer:
def __init__(self, timer):
self.__timer = timer
self.__nesting = 0
def __call__(self):
if self.__nesting == 0:
return self.__timer()
else:
return self.__time
def __enter__(self):
if self.__nesting == 0:
self.__time = time = self.__timer()
else:
time = self.__time
self.__nesting += 1
return time
def __exit__(self, *exc):
self.__nesting -= 1
def __reduce__(self):
return _TimedCache._Timer, (self.__timer,)
def __getattr__(self, name):
return getattr(self.__timer, name)
def __init__(self, maxsize, timer=time.monotonic, getsizeof=None):
Cache.__init__(self, maxsize, getsizeof)
self.__timer = _TimedCache._Timer(timer)
def __repr__(self, cache_repr=Cache.__repr__):
with self.__timer as time:
self.expire(time)
return cache_repr(self)
def __len__(self, cache_len=Cache.__len__):
with self.__timer as time:
self.expire(time)
return cache_len(self)
@property
def currsize(self):
with self.__timer as time:
self.expire(time)
return super().currsize
@property
def timer(self):
"""The timer function used by the cache."""
return self.__timer
def clear(self):
with self.__timer as time:
self.expire(time)
Cache.clear(self)
def get(self, *args, **kwargs):
with self.__timer:
return Cache.get(self, *args, **kwargs)
def pop(self, *args, **kwargs):
with self.__timer:
return Cache.pop(self, *args, **kwargs)
def setdefault(self, *args, **kwargs):
with self.__timer:
return Cache.setdefault(self, *args, **kwargs)
class TTLCache(_TimedCache):
"""LRU Cache implementation with per-item time-to-live (TTL) value."""
class _Link:
__slots__ = ("key", "expires", "next", "prev")
def __init__(self, key=None, expires=None):
self.key = key
self.expires = expires
def __reduce__(self):
return TTLCache._Link, (self.key, self.expires)
def unlink(self):
next = self.next
prev = self.prev
prev.next = next
next.prev = prev
def __init__(self, maxsize, ttl, timer=time.monotonic, getsizeof=None):
_TimedCache.__init__(self, maxsize, timer, getsizeof)
self.__root = root = TTLCache._Link()
root.prev = root.next = root
self.__links = collections.OrderedDict()
self.__ttl = ttl
def __contains__(self, key):
try:
link = self.__links[key] # no reordering
except KeyError:
return False
else:
return self.timer() < link.expires
def __getitem__(self, key, cache_getitem=Cache.__getitem__):
try:
link = self.__getlink(key)
except KeyError:
expired = False
else:
expired = not (self.timer() < link.expires)
if expired:
return self.__missing__(key)
else:
return cache_getitem(self, key)
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
with self.timer as time:
self.expire(time)
cache_setitem(self, key, value)
try:
link = self.__getlink(key)
except KeyError:
self.__links[key] = link = TTLCache._Link(key)
else:
link.unlink()
link.expires = time + self.__ttl
link.next = root = self.__root
link.prev = prev = root.prev
prev.next = root.prev = link
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
cache_delitem(self, key)
link = self.__links.pop(key)
link.unlink()
if not (self.timer() < link.expires):
raise KeyError(key)
def __iter__(self):
root = self.__root
curr = root.next
while curr is not root:
# "freeze" time for iterator access
with self.timer as time:
if time < curr.expires:
yield curr.key
curr = curr.next
def __setstate__(self, state):
self.__dict__.update(state)
root = self.__root
root.prev = root.next = root
for link in sorted(self.__links.values(), key=lambda obj: obj.expires):
link.next = root
link.prev = prev = root.prev
prev.next = root.prev = link
self.expire(self.timer())
@property
def ttl(self):
"""The time-to-live value of the cache's items."""
return self.__ttl
def expire(self, time=None):
"""Remove expired items from the cache and return an iterable of the
expired `(key, value)` pairs.
"""
if time is None:
time = self.timer()
root = self.__root
curr = root.next
links = self.__links
expired = []
cache_delitem = Cache.__delitem__
cache_getitem = Cache.__getitem__
while curr is not root and not (time < curr.expires):
expired.append((curr.key, cache_getitem(self, curr.key)))
cache_delitem(self, curr.key)
del links[curr.key]
next = curr.next
curr.unlink()
curr = next
return expired
def popitem(self):
"""Remove and return the `(key, value)` pair least recently used that
has not already expired.
"""
with self.timer as time:
self.expire(time)
try:
key = next(iter(self.__links))
except StopIteration:
raise KeyError("%s is empty" % type(self).__name__) from None
else:
return (key, self.pop(key))
def __getlink(self, key):
value = self.__links[key]
self.__links.move_to_end(key)
return value
class TLRUCache(_TimedCache):
"""Time aware Least Recently Used (TLRU) cache implementation."""
@functools.total_ordering
class _Item:
__slots__ = ("key", "expires", "removed")
def __init__(self, key=None, expires=None):
self.key = key
self.expires = expires
self.removed = False
def __lt__(self, other):
return self.expires < other.expires
def __init__(self, maxsize, ttu, timer=time.monotonic, getsizeof=None):
_TimedCache.__init__(self, maxsize, timer, getsizeof)
self.__items = collections.OrderedDict()
self.__order = []
self.__ttu = ttu
def __contains__(self, key):
try:
item = self.__items[key] # no reordering
except KeyError:
return False
else:
return self.timer() < item.expires
def __getitem__(self, key, cache_getitem=Cache.__getitem__):
try:
item = self.__getitem(key)
except KeyError:
expired = False
else:
expired = not (self.timer() < item.expires)
if expired:
return self.__missing__(key)
else:
return cache_getitem(self, key)
def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
with self.timer as time:
expires = self.__ttu(key, value, time)
if not (time < expires):
return # skip expired items
self.expire(time)
cache_setitem(self, key, value)
# removing an existing item would break the heap structure, so
# only mark it as removed for now
try:
self.__getitem(key).removed = True
except KeyError:
pass
self.__items[key] = item = TLRUCache._Item(key, expires)
heapq.heappush(self.__order, item)
def __delitem__(self, key, cache_delitem=Cache.__delitem__):
with self.timer as time:
# no self.expire() for performance reasons, e.g. self.clear() [#67]
cache_delitem(self, key)
item = self.__items.pop(key)
item.removed = True
if not (time < item.expires):
raise KeyError(key)
def __iter__(self):
for curr in self.__order:
# "freeze" time for iterator access
with self.timer as time:
if time < curr.expires and not curr.removed:
yield curr.key
@property
def ttu(self):
"""The local time-to-use function used by the cache."""
return self.__ttu
def expire(self, time=None):
"""Remove expired items from the cache and return an iterable of the
expired `(key, value)` pairs.
"""
if time is None:
time = self.timer()
items = self.__items
order = self.__order
# clean up the heap if too many items are marked as removed
if len(order) > len(items) * 2:
self.__order = order = [item for item in order if not item.removed]
heapq.heapify(order)
expired = []
cache_delitem = Cache.__delitem__
cache_getitem = Cache.__getitem__
while order and (order[0].removed or not (time < order[0].expires)):
item = heapq.heappop(order)
if not item.removed:
expired.append((item.key, cache_getitem(self, item.key)))
cache_delitem(self, item.key)
del items[item.key]
return expired
def popitem(self):
"""Remove and return the `(key, value)` pair least recently used that
has not already expired.
"""
with self.timer as time:
self.expire(time)
try:
key = next(iter(self.__items))
except StopIteration:
raise KeyError("%s is empty" % self.__class__.__name__) from None
else:
return (key, self.pop(key))
def __getitem(self, key):
value = self.__items[key]
self.__items.move_to_end(key)
return value
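# Illustrative usage sketch, not part of the library source: the `ttu`
# callable receives the key, the value, and the current time and returns the
# per-item expiration time; `lifetime_by_key` is a hypothetical mapping.
#
#     from cachetools import TLRUCache
#
#     lifetime_by_key = {"short": 5, "long": 300}
#
#     def my_ttu(key, _value, now):
#         # each key carries its own time-to-use, defaulting to 60 time units
#         return now + lifetime_by_key.get(key, 60)
#
#     cache = TLRUCache(maxsize=128, ttu=my_ttu)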
_CacheInfo = collections.namedtuple(
"CacheInfo", ["hits", "misses", "maxsize", "currsize"]
)
def cached(cache, key=keys.hashkey, lock=None, info=False):
"""Decorator to wrap a function with a memoizing callable that saves
results in a cache.
"""
def decorator(func):
if info:
hits = misses = 0
if isinstance(cache, Cache):
def getinfo():
nonlocal hits, misses
return _CacheInfo(hits, misses, cache.maxsize, cache.currsize)
elif isinstance(cache, collections.abc.Mapping):
def getinfo():
nonlocal hits, misses
return _CacheInfo(hits, misses, None, len(cache))
else:
def getinfo():
nonlocal hits, misses
return _CacheInfo(hits, misses, 0, 0)
if cache is None:
def wrapper(*args, **kwargs):
nonlocal misses
misses += 1
return func(*args, **kwargs)
def cache_clear():
nonlocal hits, misses
hits = misses = 0
cache_info = getinfo
elif lock is None:
def wrapper(*args, **kwargs):
nonlocal hits, misses
k = key(*args, **kwargs)
try:
result = cache[k]
hits += 1
return result
except KeyError:
misses += 1
v = func(*args, **kwargs)
try:
cache[k] = v
except ValueError:
pass # value too large
return v
def cache_clear():
nonlocal hits, misses
cache.clear()
hits = misses = 0
cache_info = getinfo
else:
def wrapper(*args, **kwargs):
nonlocal hits, misses
k = key(*args, **kwargs)
try:
with lock:
result = cache[k]
hits += 1
return result
except KeyError:
with lock:
misses += 1
v = func(*args, **kwargs)
# in case of a race, prefer the item already in the cache
try:
with lock:
return cache.setdefault(k, v)
except ValueError:
return v # value too large
def cache_clear():
nonlocal hits, misses
with lock:
cache.clear()
hits = misses = 0
def cache_info():
with lock:
return getinfo()
else:
if cache is None:
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
def cache_clear():
pass
elif lock is None:
def wrapper(*args, **kwargs):
k = key(*args, **kwargs)
try:
return cache[k]
except KeyError:
pass # key not found
v = func(*args, **kwargs)
try:
cache[k] = v
except ValueError:
pass # value too large
return v
def cache_clear():
cache.clear()
else:
def wrapper(*args, **kwargs):
k = key(*args, **kwargs)
try:
with lock:
return cache[k]
except KeyError:
pass # key not found
v = func(*args, **kwargs)
# in case of a race, prefer the item already in the cache
try:
with lock:
return cache.setdefault(k, v)
except ValueError:
return v # value too large
def cache_clear():
with lock:
cache.clear()
cache_info = None
wrapper.cache = cache
wrapper.cache_key = key
wrapper.cache_lock = lock
wrapper.cache_clear = cache_clear
wrapper.cache_info = cache_info
return functools.update_wrapper(wrapper, func)
return decorator
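# Illustrative usage sketch, not part of the library source: memoizing a
# recursive function with a bounded LRU cache and a lock for thread safety;
# info=True enables the cache_info() counters defined above. The function
# and sizes are example values only.
#
#     import threading
#     from cachetools import LRUCache, cached
#
#     @cached(cache=LRUCache(maxsize=32), lock=threading.Lock(), info=True)
#     def fib(n):
#         return n if n < 2 else fib(n - 1) + fib(n - 2)
#
#     fib(30)
#     fib.cache_info()  # returns a (hits, misses, maxsize, currsize) tuple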
def cachedmethod(cache, key=keys.methodkey, lock=None):
"""Decorator to wrap a class or instance method with a memoizing
callable that saves results in a cache.
"""
def decorator(method):
if lock is None:
def wrapper(self, *args, **kwargs):
c = cache(self)
if c is None:
return method(self, *args, **kwargs)
k = key(self, *args, **kwargs)
try:
return c[k]
except KeyError:
pass # key not found
v = method(self, *args, **kwargs)
try:
c[k] = v
except ValueError:
pass # value too large
return v
def clear(self):
c = cache(self)
if c is not None:
c.clear()
else:
def wrapper(self, *args, **kwargs):
c = cache(self)
if c is None:
return method(self, *args, **kwargs)
k = key(self, *args, **kwargs)
try:
with lock(self):
return c[k]
except KeyError:
pass # key not found
v = method(self, *args, **kwargs)
# in case of a race, prefer the item already in the cache
try:
with lock(self):
return c.setdefault(k, v)
except ValueError:
return v # value too large
def clear(self):
c = cache(self)
if c is not None:
with lock(self):
c.clear()
wrapper.cache = cache
wrapper.cache_key = key
wrapper.cache_lock = lock
wrapper.cache_clear = clear
return functools.update_wrapper(wrapper, method)
return decorator
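# Illustrative usage sketch, not part of the library source: caching an
# instance method in a per-instance cache looked up via operator.attrgetter;
# the PEPStore class and URL are example names only.
#
#     import operator
#     import urllib.request
#     from cachetools import LRUCache, cachedmethod
#
#     class PEPStore:
#         def __init__(self):
#             self.cache = LRUCache(maxsize=16)
#
#         @cachedmethod(operator.attrgetter("cache"))
#         def get_pep(self, num):
#             url = "https://peps.python.org/pep-%04d/" % num
#             with urllib.request.urlopen(url) as s:
#                 return s.read()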

View File

@@ -0,0 +1,121 @@
"""`functools.lru_cache` compatible memoizing function decorators."""
__all__ = ("fifo_cache", "lfu_cache", "lru_cache", "mru_cache", "rr_cache", "ttl_cache")
import math
import random
import time
try:
from threading import RLock
except ImportError: # pragma: no cover
from dummy_threading import RLock
from . import FIFOCache, LFUCache, LRUCache, MRUCache, RRCache, TTLCache
from . import cached
from . import keys
class _UnboundTTLCache(TTLCache):
def __init__(self, ttl, timer):
TTLCache.__init__(self, math.inf, ttl, timer)
@property
def maxsize(self):
return None
def _cache(cache, maxsize, typed):
def decorator(func):
key = keys.typedkey if typed else keys.hashkey
wrapper = cached(cache=cache, key=key, lock=RLock(), info=True)(func)
wrapper.cache_parameters = lambda: {"maxsize": maxsize, "typed": typed}
return wrapper
return decorator
def fifo_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a First In First Out (FIFO)
algorithm.
"""
if maxsize is None:
return _cache({}, None, typed)
elif callable(maxsize):
return _cache(FIFOCache(128), 128, typed)(maxsize)
else:
return _cache(FIFOCache(maxsize), maxsize, typed)
def lfu_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Frequently Used (LFU)
algorithm.
"""
if maxsize is None:
return _cache({}, None, typed)
elif callable(maxsize):
return _cache(LFUCache(128), 128, typed)(maxsize)
else:
return _cache(LFUCache(maxsize), maxsize, typed)
def lru_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm.
"""
if maxsize is None:
return _cache({}, None, typed)
elif callable(maxsize):
return _cache(LRUCache(128), 128, typed)(maxsize)
else:
return _cache(LRUCache(maxsize), maxsize, typed)
def mru_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Most Recently Used (MRU)
algorithm.
"""
from warnings import warn
warn("@mru_cache is deprecated", DeprecationWarning, stacklevel=2)
if maxsize is None:
return _cache({}, None, typed)
elif callable(maxsize):
return _cache(MRUCache(128), 128, typed)(maxsize)
else:
return _cache(MRUCache(maxsize), maxsize, typed)
def rr_cache(maxsize=128, choice=random.choice, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Random Replacement (RR)
algorithm.
"""
if maxsize is None:
return _cache({}, None, typed)
elif callable(maxsize):
return _cache(RRCache(128, choice), 128, typed)(maxsize)
else:
return _cache(RRCache(maxsize, choice), maxsize, typed)
def ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm with a per-item time-to-live (TTL) value.
"""
if maxsize is None:
return _cache(_UnboundTTLCache(ttl, timer), None, typed)
elif callable(maxsize):
return _cache(TTLCache(128, ttl, timer), 128, typed)(maxsize)
else:
return _cache(TTLCache(maxsize, ttl, timer), maxsize, typed)
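# Illustrative usage sketch, not part of the module source: ttl_cache() works
# as a drop-in replacement for functools.lru_cache with a per-item
# time-to-live; lookup() is a hypothetical function.
#
#     import cachetools.func
#
#     @cachetools.func.ttl_cache(maxsize=128, ttl=600)
#     def lookup(name):
#         ...  # expensive work, repeated at most once per 600 seconds per key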

View File

@@ -0,0 +1,62 @@
"""Key functions for memoizing decorators."""
__all__ = ("hashkey", "methodkey", "typedkey", "typedmethodkey")
class _HashedTuple(tuple):
"""A tuple that ensures that hash() will be called no more than once
per element, since cache decorators will hash the key multiple
times on a cache miss. See also _HashedSeq in the standard
library functools implementation.
"""
__hashvalue = None
def __hash__(self, hash=tuple.__hash__):
hashvalue = self.__hashvalue
if hashvalue is None:
self.__hashvalue = hashvalue = hash(self)
return hashvalue
def __add__(self, other, add=tuple.__add__):
return _HashedTuple(add(self, other))
def __radd__(self, other, add=tuple.__add__):
return _HashedTuple(add(other, self))
def __getstate__(self):
return {}
# used for separating keyword arguments; we do not use an object
# instance here so identity is preserved when pickling/unpickling
_kwmark = (_HashedTuple,)
def hashkey(*args, **kwargs):
"""Return a cache key for the specified hashable arguments."""
if kwargs:
return _HashedTuple(args + sum(sorted(kwargs.items()), _kwmark))
else:
return _HashedTuple(args)
def methodkey(self, *args, **kwargs):
"""Return a cache key for use with cached methods."""
return hashkey(*args, **kwargs)
def typedkey(*args, **kwargs):
"""Return a typed cache key for the specified hashable arguments."""
key = hashkey(*args, **kwargs)
key += tuple(type(v) for v in args)
key += tuple(type(v) for _, v in sorted(kwargs.items()))
return key
def typedmethodkey(self, *args, **kwargs):
"""Return a typed cache key for use with cached methods."""
return typedkey(*args, **kwargs)
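# Illustrative sketch, not part of the module source: hashkey() ignores
# argument types, while typedkey() appends them, so equal int and float
# arguments map to distinct typed keys.
#
#     from cachetools import keys
#
#     keys.hashkey(1, 2) == keys.hashkey(1.0, 2.0)    # True
#     keys.typedkey(1, 2) == keys.typedkey(1.0, 2.0)  # False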

View File

@@ -0,0 +1,301 @@
import unittest
class CacheTestMixin:
Cache = None
def test_defaults(self):
cache = self.Cache(maxsize=1)
self.assertEqual(0, len(cache))
self.assertEqual(1, cache.maxsize)
self.assertEqual(0, cache.currsize)
self.assertEqual(1, cache.getsizeof(None))
self.assertEqual(1, cache.getsizeof(""))
self.assertEqual(1, cache.getsizeof(0))
self.assertTrue(repr(cache).startswith(cache.__class__.__name__))
def test_insert(self):
cache = self.Cache(maxsize=2)
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache[3] = 3
self.assertEqual(2, len(cache))
self.assertEqual(3, cache[3])
self.assertTrue(1 in cache or 2 in cache)
cache[4] = 4
self.assertEqual(2, len(cache))
self.assertEqual(4, cache[4])
self.assertTrue(1 in cache or 2 in cache or 3 in cache)
def test_update(self):
cache = self.Cache(maxsize=2)
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache.update({1: "a", 2: "b"})
self.assertEqual(2, len(cache))
self.assertEqual("a", cache[1])
self.assertEqual("b", cache[2])
def test_delete(self):
cache = self.Cache(maxsize=2)
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
del cache[2]
self.assertEqual(1, len(cache))
self.assertEqual(1, cache[1])
self.assertNotIn(2, cache)
del cache[1]
self.assertEqual(0, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
with self.assertRaises(KeyError):
del cache[1]
self.assertEqual(0, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
def test_pop(self):
cache = self.Cache(maxsize=2)
cache.update({1: 1, 2: 2})
self.assertEqual(2, cache.pop(2))
self.assertEqual(1, len(cache))
self.assertEqual(1, cache.pop(1))
self.assertEqual(0, len(cache))
with self.assertRaises(KeyError):
cache.pop(2)
with self.assertRaises(KeyError):
cache.pop(1)
with self.assertRaises(KeyError):
cache.pop(0)
self.assertEqual(None, cache.pop(2, None))
self.assertEqual(None, cache.pop(1, None))
self.assertEqual(None, cache.pop(0, None))
def test_popitem(self):
cache = self.Cache(maxsize=2)
cache.update({1: 1, 2: 2})
        self.assertIn(cache.popitem(), {(1, 1), (2, 2)})
        self.assertEqual(1, len(cache))
        self.assertIn(cache.popitem(), {(1, 1), (2, 2)})
self.assertEqual(0, len(cache))
with self.assertRaises(KeyError):
cache.popitem()
def test_popitem_exception_context(self):
# since Python 3.7, MutableMapping.popitem() suppresses
# exception context as implementation detail
exception = None
try:
self.Cache(maxsize=2).popitem()
except Exception as e:
exception = e
self.assertIsNone(exception.__cause__)
self.assertTrue(exception.__suppress_context__)
def test_missing(self):
class DefaultCache(self.Cache):
def __missing__(self, key):
self[key] = key
return key
cache = DefaultCache(maxsize=2)
self.assertEqual(0, cache.currsize)
self.assertEqual(2, cache.maxsize)
self.assertEqual(0, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(2, len(cache))
self.assertTrue(1 in cache and 2 in cache)
self.assertEqual(3, cache[3])
self.assertEqual(2, len(cache))
self.assertTrue(3 in cache)
self.assertTrue(1 in cache or 2 in cache)
self.assertTrue(1 not in cache or 2 not in cache)
self.assertEqual(4, cache[4])
self.assertEqual(2, len(cache))
self.assertTrue(4 in cache)
self.assertTrue(1 in cache or 2 in cache or 3 in cache)
# verify __missing__() is *not* called for any operations
# besides __getitem__()
self.assertEqual(4, cache.get(4))
self.assertEqual(None, cache.get(5))
self.assertEqual(5 * 5, cache.get(5, 5 * 5))
self.assertEqual(2, len(cache))
self.assertEqual(4, cache.pop(4))
with self.assertRaises(KeyError):
cache.pop(5)
self.assertEqual(None, cache.pop(5, None))
self.assertEqual(5 * 5, cache.pop(5, 5 * 5))
self.assertEqual(1, len(cache))
cache.clear()
cache[1] = 1 + 1
self.assertEqual(1 + 1, cache.setdefault(1))
self.assertEqual(1 + 1, cache.setdefault(1, 1))
self.assertEqual(1 + 1, cache[1])
self.assertEqual(2 + 2, cache.setdefault(2, 2 + 2))
self.assertEqual(2 + 2, cache.setdefault(2, None))
self.assertEqual(2 + 2, cache.setdefault(2))
self.assertEqual(2 + 2, cache[2])
self.assertEqual(2, len(cache))
self.assertTrue(1 in cache and 2 in cache)
self.assertEqual(None, cache.setdefault(3))
self.assertEqual(2, len(cache))
self.assertTrue(3 in cache)
self.assertTrue(1 in cache or 2 in cache)
self.assertTrue(1 not in cache or 2 not in cache)
def test_missing_getsizeof(self):
class DefaultCache(self.Cache):
def __missing__(self, key):
try:
self[key] = key
except ValueError:
pass # not stored
return key
cache = DefaultCache(maxsize=2, getsizeof=lambda x: x)
self.assertEqual(0, cache.currsize)
self.assertEqual(2, cache.maxsize)
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual(1, cache.currsize)
self.assertIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(1, len(cache))
self.assertEqual(2, cache.currsize)
self.assertNotIn(1, cache)
self.assertIn(2, cache)
self.assertEqual(3, cache[3]) # not stored
self.assertEqual(1, len(cache))
self.assertEqual(2, cache.currsize)
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual(1, cache.currsize)
self.assertEqual((1, 1), cache.popitem())
def _test_getsizeof(self, cache):
self.assertEqual(0, cache.currsize)
self.assertEqual(3, cache.maxsize)
self.assertEqual(1, cache.getsizeof(1))
self.assertEqual(2, cache.getsizeof(2))
self.assertEqual(3, cache.getsizeof(3))
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(3, cache.currsize)
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache[1] = 2
self.assertEqual(1, len(cache))
self.assertEqual(2, cache.currsize)
self.assertEqual(2, cache[1])
self.assertNotIn(2, cache)
cache.update({1: 1, 2: 2})
self.assertEqual(2, len(cache))
self.assertEqual(3, cache.currsize)
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache[3] = 3
self.assertEqual(1, len(cache))
self.assertEqual(3, cache.currsize)
self.assertEqual(3, cache[3])
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
with self.assertRaises(ValueError):
cache[3] = 4
self.assertEqual(1, len(cache))
self.assertEqual(3, cache.currsize)
self.assertEqual(3, cache[3])
with self.assertRaises(ValueError):
cache[4] = 4
self.assertEqual(1, len(cache))
self.assertEqual(3, cache.currsize)
self.assertEqual(3, cache[3])
def test_getsizeof_param(self):
self._test_getsizeof(self.Cache(maxsize=3, getsizeof=lambda x: x))
def test_getsizeof_subclass(self):
class Cache(self.Cache):
def getsizeof(self, value):
return value
self._test_getsizeof(Cache(maxsize=3))
def test_pickle(self):
import pickle
source = self.Cache(maxsize=2)
source.update({1: 1, 2: 2})
cache = pickle.loads(pickle.dumps(source))
self.assertEqual(source, cache)
self.assertEqual(2, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
cache[3] = 3
self.assertEqual(2, len(cache))
self.assertEqual(3, cache[3])
self.assertTrue(1 in cache or 2 in cache)
cache[4] = 4
self.assertEqual(2, len(cache))
self.assertEqual(4, cache[4])
self.assertTrue(1 in cache or 2 in cache or 3 in cache)
self.assertEqual(cache, pickle.loads(pickle.dumps(cache)))
def test_pickle_maxsize(self):
import pickle
import sys
# test empty cache, single element, large cache (recursion limit)
for n in [0, 1, sys.getrecursionlimit() * 2]:
source = self.Cache(maxsize=n)
source.update((i, i) for i in range(n))
cache = pickle.loads(pickle.dumps(source))
self.assertEqual(n, len(cache))
self.assertEqual(source, cache)

View File

@@ -0,0 +1,9 @@
import unittest
import cachetools
from . import CacheTestMixin
class CacheTest(unittest.TestCase, CacheTestMixin):
Cache = cachetools.Cache

View File

@@ -0,0 +1,245 @@
import unittest
import cachetools
import cachetools.keys
class CountedLock:
def __init__(self):
self.count = 0
def __enter__(self):
self.count += 1
def __exit__(self, *exc):
pass
class DecoratorTestMixin:
def cache(self, minsize):
raise NotImplementedError
def func(self, *args, **kwargs):
if hasattr(self, "count"):
self.count += 1
else:
self.count = 0
return self.count
def test_decorator(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache)(self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 1)
self.assertIn(cachetools.keys.hashkey(0), cache)
self.assertNotIn(cachetools.keys.hashkey(1), cache)
self.assertNotIn(cachetools.keys.hashkey(1.0), cache)
self.assertEqual(wrapper(1), 1)
self.assertEqual(len(cache), 2)
self.assertIn(cachetools.keys.hashkey(0), cache)
self.assertIn(cachetools.keys.hashkey(1), cache)
self.assertIn(cachetools.keys.hashkey(1.0), cache)
self.assertEqual(wrapper(1), 1)
self.assertEqual(len(cache), 2)
self.assertEqual(wrapper(1.0), 1)
self.assertEqual(len(cache), 2)
self.assertEqual(wrapper(1.0), 1)
self.assertEqual(len(cache), 2)
def test_decorator_typed(self):
cache = self.cache(3)
key = cachetools.keys.typedkey
wrapper = cachetools.cached(cache, key=key)(self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 1)
self.assertIn(cachetools.keys.typedkey(0), cache)
self.assertNotIn(cachetools.keys.typedkey(1), cache)
self.assertNotIn(cachetools.keys.typedkey(1.0), cache)
self.assertEqual(wrapper(1), 1)
self.assertEqual(len(cache), 2)
self.assertIn(cachetools.keys.typedkey(0), cache)
self.assertIn(cachetools.keys.typedkey(1), cache)
self.assertNotIn(cachetools.keys.typedkey(1.0), cache)
self.assertEqual(wrapper(1), 1)
self.assertEqual(len(cache), 2)
self.assertEqual(wrapper(1.0), 2)
self.assertEqual(len(cache), 3)
self.assertIn(cachetools.keys.typedkey(0), cache)
self.assertIn(cachetools.keys.typedkey(1), cache)
self.assertIn(cachetools.keys.typedkey(1.0), cache)
self.assertEqual(wrapper(1.0), 2)
self.assertEqual(len(cache), 3)
def test_decorator_lock(self):
cache = self.cache(2)
lock = CountedLock()
wrapper = cachetools.cached(cache, lock=lock)(self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 0)
self.assertEqual(lock.count, 2)
self.assertEqual(wrapper(1), 1)
self.assertEqual(lock.count, 4)
self.assertEqual(wrapper(1), 1)
self.assertEqual(lock.count, 5)
def test_decorator_wrapped(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache)(self.func)
self.assertEqual(wrapper.__wrapped__, self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper.__wrapped__(0), 0)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 1)
self.assertEqual(len(cache), 1)
self.assertEqual(wrapper(0), 1)
self.assertEqual(len(cache), 1)
def test_decorator_attributes(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache)(self.func)
self.assertIs(wrapper.cache, cache)
self.assertIs(wrapper.cache_key, cachetools.keys.hashkey)
self.assertIs(wrapper.cache_lock, None)
def test_decorator_attributes_lock(self):
cache = self.cache(2)
lock = CountedLock()
wrapper = cachetools.cached(cache, lock=lock)(self.func)
self.assertIs(wrapper.cache, cache)
self.assertIs(wrapper.cache_key, cachetools.keys.hashkey)
self.assertIs(wrapper.cache_lock, lock)
def test_decorator_clear(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache)(self.func)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 1)
wrapper.cache_clear()
self.assertEqual(len(cache), 0)
def test_decorator_clear_lock(self):
cache = self.cache(2)
lock = CountedLock()
wrapper = cachetools.cached(cache, lock=lock)(self.func)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 1)
self.assertEqual(lock.count, 2)
wrapper.cache_clear()
self.assertEqual(len(cache), 0)
self.assertEqual(lock.count, 3)
class CacheWrapperTest(unittest.TestCase, DecoratorTestMixin):
def cache(self, minsize):
return cachetools.Cache(maxsize=minsize)
def test_decorator_info(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache, info=True)(self.func)
self.assertEqual(wrapper.cache_info(), (0, 0, 2, 0))
self.assertEqual(wrapper(0), 0)
self.assertEqual(wrapper.cache_info(), (0, 1, 2, 1))
self.assertEqual(wrapper(1), 1)
self.assertEqual(wrapper.cache_info(), (0, 2, 2, 2))
self.assertEqual(wrapper(0), 0)
self.assertEqual(wrapper.cache_info(), (1, 2, 2, 2))
wrapper.cache_clear()
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper.cache_info(), (0, 0, 2, 0))
def test_zero_size_cache_decorator(self):
cache = self.cache(0)
wrapper = cachetools.cached(cache)(self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 0)
def test_zero_size_cache_decorator_lock(self):
cache = self.cache(0)
lock = CountedLock()
wrapper = cachetools.cached(cache, lock=lock)(self.func)
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper(0), 0)
self.assertEqual(len(cache), 0)
self.assertEqual(lock.count, 2)
def test_zero_size_cache_decorator_info(self):
cache = self.cache(0)
wrapper = cachetools.cached(cache, info=True)(self.func)
self.assertEqual(wrapper.cache_info(), (0, 0, 0, 0))
self.assertEqual(wrapper(0), 0)
self.assertEqual(wrapper.cache_info(), (0, 1, 0, 0))
class DictWrapperTest(unittest.TestCase, DecoratorTestMixin):
def cache(self, minsize):
return dict()
def test_decorator_info(self):
cache = self.cache(2)
wrapper = cachetools.cached(cache, info=True)(self.func)
self.assertEqual(wrapper.cache_info(), (0, 0, None, 0))
self.assertEqual(wrapper(0), 0)
self.assertEqual(wrapper.cache_info(), (0, 1, None, 1))
self.assertEqual(wrapper(1), 1)
self.assertEqual(wrapper.cache_info(), (0, 2, None, 2))
self.assertEqual(wrapper(0), 0)
self.assertEqual(wrapper.cache_info(), (1, 2, None, 2))
wrapper.cache_clear()
self.assertEqual(len(cache), 0)
self.assertEqual(wrapper.cache_info(), (0, 0, None, 0))
class NoneWrapperTest(unittest.TestCase):
def func(self, *args, **kwargs):
return args + tuple(kwargs.items())
def test_decorator(self):
wrapper = cachetools.cached(None)(self.func)
self.assertEqual(wrapper(0), (0,))
self.assertEqual(wrapper(1), (1,))
self.assertEqual(wrapper(1, foo="bar"), (1, ("foo", "bar")))
def test_decorator_attributes(self):
wrapper = cachetools.cached(None)(self.func)
self.assertIs(wrapper.cache, None)
self.assertIs(wrapper.cache_key, cachetools.keys.hashkey)
self.assertIs(wrapper.cache_lock, None)
def test_decorator_clear(self):
wrapper = cachetools.cached(None)(self.func)
wrapper.cache_clear() # no-op
def test_decorator_info(self):
wrapper = cachetools.cached(None, info=True)(self.func)
self.assertEqual(wrapper.cache_info(), (0, 0, 0, 0))
self.assertEqual(wrapper(0), (0,))
self.assertEqual(wrapper.cache_info(), (0, 1, 0, 0))
self.assertEqual(wrapper(1), (1,))
self.assertEqual(wrapper.cache_info(), (0, 2, 0, 0))
wrapper.cache_clear()
self.assertEqual(wrapper.cache_info(), (0, 0, 0, 0))

View File

@@ -0,0 +1,234 @@
import unittest
from cachetools import LRUCache, cachedmethod, keys
class Cached:
def __init__(self, cache, count=0):
self.cache = cache
self.count = count
@cachedmethod(lambda self: self.cache)
def get(self, value):
count = self.count
self.count += 1
return count
@cachedmethod(lambda self: self.cache, key=keys.typedmethodkey)
def get_typedmethod(self, value):
count = self.count
self.count += 1
return count
class Locked:
def __init__(self, cache):
self.cache = cache
self.count = 0
@cachedmethod(lambda self: self.cache, lock=lambda self: self)
def get(self, value):
return self.count
def __enter__(self):
self.count += 1
def __exit__(self, *exc):
pass
class Unhashable:
def __init__(self, cache):
self.cache = cache
@cachedmethod(lambda self: self.cache)
def get_default(self, value):
return value
@cachedmethod(lambda self: self.cache, key=keys.hashkey)
def get_hashkey(self, value):
return value
# https://github.com/tkem/cachetools/issues/107
def __hash__(self):
raise TypeError("unhashable type")
class CachedMethodTest(unittest.TestCase):
def test_dict(self):
cached = Cached({})
self.assertEqual(cached.get(0), 0)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1.0), 1)
self.assertEqual(cached.get(1.0), 1)
cached.cache.clear()
self.assertEqual(cached.get(1), 2)
def test_typedmethod_dict(self):
cached = Cached(LRUCache(maxsize=2))
self.assertEqual(cached.get_typedmethod(0), 0)
self.assertEqual(cached.get_typedmethod(1), 1)
self.assertEqual(cached.get_typedmethod(1), 1)
self.assertEqual(cached.get_typedmethod(1.0), 2)
self.assertEqual(cached.get_typedmethod(1.0), 2)
self.assertEqual(cached.get_typedmethod(0.0), 3)
self.assertEqual(cached.get_typedmethod(0), 4)
def test_lru(self):
cached = Cached(LRUCache(maxsize=2))
self.assertEqual(cached.get(0), 0)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1.0), 1)
self.assertEqual(cached.get(1.0), 1)
cached.cache.clear()
self.assertEqual(cached.get(1), 2)
def test_typedmethod_lru(self):
cached = Cached(LRUCache(maxsize=2))
self.assertEqual(cached.get_typedmethod(0), 0)
self.assertEqual(cached.get_typedmethod(1), 1)
self.assertEqual(cached.get_typedmethod(1), 1)
self.assertEqual(cached.get_typedmethod(1.0), 2)
self.assertEqual(cached.get_typedmethod(1.0), 2)
self.assertEqual(cached.get_typedmethod(0.0), 3)
self.assertEqual(cached.get_typedmethod(0), 4)
def test_nospace(self):
cached = Cached(LRUCache(maxsize=0))
self.assertEqual(cached.get(0), 0)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1), 2)
self.assertEqual(cached.get(1.0), 3)
self.assertEqual(cached.get(1.0), 4)
def test_nocache(self):
cached = Cached(None)
self.assertEqual(cached.get(0), 0)
self.assertEqual(cached.get(1), 1)
self.assertEqual(cached.get(1), 2)
self.assertEqual(cached.get(1.0), 3)
self.assertEqual(cached.get(1.0), 4)
def test_weakref(self):
import weakref
import fractions
import gc
# in Python 3.7, `int` does not support weak references even
# when subclassed, but Fraction apparently does...
class Int(fractions.Fraction):
def __add__(self, other):
return Int(fractions.Fraction.__add__(self, other))
cached = Cached(weakref.WeakValueDictionary(), count=Int(0))
self.assertEqual(cached.get(0), 0)
gc.collect()
self.assertEqual(cached.get(0), 1)
ref = cached.get(1)
self.assertEqual(ref, 2)
self.assertEqual(cached.get(1), 2)
self.assertEqual(cached.get(1.0), 2)
ref = cached.get_typedmethod(1)
self.assertEqual(ref, 3)
self.assertEqual(cached.get_typedmethod(1), 3)
self.assertEqual(cached.get_typedmethod(1.0), 4)
cached.cache.clear()
self.assertEqual(cached.get(1), 5)
def test_locked_dict(self):
cached = Locked({})
self.assertEqual(cached.get(0), 1)
self.assertEqual(cached.get(1), 3)
self.assertEqual(cached.get(1), 3)
self.assertEqual(cached.get(1.0), 3)
self.assertEqual(cached.get(2.0), 7)
def test_locked_nocache(self):
cached = Locked(None)
self.assertEqual(cached.get(0), 0)
self.assertEqual(cached.get(1), 0)
self.assertEqual(cached.get(1), 0)
self.assertEqual(cached.get(1.0), 0)
self.assertEqual(cached.get(1.0), 0)
def test_locked_nospace(self):
cached = Locked(LRUCache(maxsize=0))
self.assertEqual(cached.get(0), 1)
self.assertEqual(cached.get(1), 3)
self.assertEqual(cached.get(1), 5)
self.assertEqual(cached.get(1.0), 7)
self.assertEqual(cached.get(1.0), 9)
def test_unhashable(self):
cached = Unhashable(LRUCache(maxsize=0))
self.assertEqual(cached.get_default(0), 0)
self.assertEqual(cached.get_default(1), 1)
with self.assertRaises(TypeError):
cached.get_hashkey(0)
def test_wrapped(self):
cache = {}
cached = Cached(cache)
self.assertEqual(len(cache), 0)
self.assertEqual(cached.get.__wrapped__(cached, 0), 0)
self.assertEqual(len(cache), 0)
self.assertEqual(cached.get(0), 1)
self.assertEqual(len(cache), 1)
self.assertEqual(cached.get(0), 1)
self.assertEqual(len(cache), 1)
def test_attributes(self):
cache = {}
cached = Cached(cache)
self.assertIs(cached.get.cache(cached), cache)
self.assertIs(cached.get.cache_key, keys.methodkey)
self.assertIs(cached.get.cache_lock, None)
def test_attributes_lock(self):
cache = {}
cached = Locked(cache)
self.assertIs(cached.get.cache(cached), cache)
self.assertIs(cached.get.cache_key, keys.methodkey)
self.assertIs(cached.get.cache_lock(cached), cached)
def test_clear(self):
cache = {}
cached = Cached(cache)
self.assertEqual(cached.get(0), 0)
self.assertEqual(len(cache), 1)
cached.get.cache_clear(cached)
self.assertEqual(len(cache), 0)
def test_clear_locked(self):
cache = {}
cached = Locked(cache)
self.assertEqual(cached.get(0), 1)
self.assertEqual(len(cache), 1)
self.assertEqual(cached.count, 2)
cached.get.cache_clear(cached)
self.assertEqual(len(cache), 0)
self.assertEqual(cached.count, 3)

View File

@@ -0,0 +1,56 @@
import unittest
from cachetools import FIFOCache
from . import CacheTestMixin
class FIFOCacheTest(unittest.TestCase, CacheTestMixin):
Cache = FIFOCache
def test_fifo(self):
cache = FIFOCache(maxsize=2)
cache[1] = 1
cache[2] = 2
cache[3] = 3
self.assertEqual(len(cache), 2)
self.assertEqual(cache[2], 2)
self.assertEqual(cache[3], 3)
self.assertNotIn(1, cache)
        cache[2]  # reading an item does not affect FIFO eviction order
cache[4] = 4
self.assertEqual(len(cache), 2)
self.assertEqual(cache[3], 3)
self.assertEqual(cache[4], 4)
self.assertNotIn(2, cache)
cache[5] = 5
self.assertEqual(len(cache), 2)
self.assertEqual(cache[4], 4)
self.assertEqual(cache[5], 5)
self.assertNotIn(3, cache)
def test_fifo_getsizeof(self):
cache = FIFOCache(maxsize=3, getsizeof=lambda x: x)
cache[1] = 1
cache[2] = 2
self.assertEqual(len(cache), 2)
self.assertEqual(cache[1], 1)
self.assertEqual(cache[2], 2)
cache[3] = 3
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
with self.assertRaises(ValueError):
cache[4] = 4
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)

View File

@@ -0,0 +1,131 @@
import unittest
import cachetools.func
class DecoratorTestMixin:
def decorator(self, maxsize, **kwargs):
return self.DECORATOR(maxsize, **kwargs)
def test_decorator(self):
cached = self.decorator(maxsize=2)(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": 2, "typed": False})
self.assertEqual(cached.cache_info(), (0, 0, 2, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 2, 1))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (1, 1, 2, 1))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (2, 1, 2, 1))
def test_decorator_clear(self):
cached = self.decorator(maxsize=2)(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": 2, "typed": False})
self.assertEqual(cached.cache_info(), (0, 0, 2, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 2, 1))
cached.cache_clear()
self.assertEqual(cached.cache_info(), (0, 0, 2, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 2, 1))
def test_decorator_nocache(self):
cached = self.decorator(maxsize=0)(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": 0, "typed": False})
self.assertEqual(cached.cache_info(), (0, 0, 0, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 0, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 2, 0, 0))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (0, 3, 0, 0))
def test_decorator_unbound(self):
cached = self.decorator(maxsize=None)(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": None, "typed": False})
self.assertEqual(cached.cache_info(), (0, 0, None, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, None, 1))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (1, 1, None, 1))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (2, 1, None, 1))
def test_decorator_typed(self):
cached = self.decorator(maxsize=2, typed=True)(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": 2, "typed": True})
self.assertEqual(cached.cache_info(), (0, 0, 2, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 2, 1))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (1, 1, 2, 1))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (1, 2, 2, 2))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (2, 2, 2, 2))
def test_decorator_user_function(self):
cached = self.decorator(lambda n: n)
self.assertEqual(cached.cache_parameters(), {"maxsize": 128, "typed": False})
self.assertEqual(cached.cache_info(), (0, 0, 128, 0))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (0, 1, 128, 1))
self.assertEqual(cached(1), 1)
self.assertEqual(cached.cache_info(), (1, 1, 128, 1))
self.assertEqual(cached(1.0), 1.0)
self.assertEqual(cached.cache_info(), (2, 1, 128, 1))
def test_decorator_needs_rlock(self):
cached = self.decorator(lambda n: n)
class RecursiveEquals:
def __init__(self, use_cache):
self._use_cache = use_cache
def __hash__(self):
return hash(self._use_cache)
def __eq__(self, other):
if self._use_cache:
# This call will happen while the cache-lock is held,
# requiring a reentrant lock to avoid deadlock.
cached(self)
return self._use_cache == other._use_cache
# Prime the cache.
cached(RecursiveEquals(False))
cached(RecursiveEquals(True))
# Then do a call which will cause a deadlock with a non-reentrant lock.
self.assertEqual(cached(RecursiveEquals(True)), RecursiveEquals(True))
class FIFODecoratorTest(unittest.TestCase, DecoratorTestMixin):
DECORATOR = staticmethod(cachetools.func.fifo_cache)
class LFUDecoratorTest(unittest.TestCase, DecoratorTestMixin):
DECORATOR = staticmethod(cachetools.func.lfu_cache)
class LRUDecoratorTest(unittest.TestCase, DecoratorTestMixin):
DECORATOR = staticmethod(cachetools.func.lru_cache)
class MRUDecoratorTest(unittest.TestCase, DecoratorTestMixin):
def decorator(self, maxsize, **kwargs):
import warnings
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
d = cachetools.func.mru_cache(maxsize, **kwargs)
self.assertNotEqual(len(w), 0)
self.assertIs(w[0].category, DeprecationWarning)
return d
class RRDecoratorTest(unittest.TestCase, DecoratorTestMixin):
DECORATOR = staticmethod(cachetools.func.rr_cache)
class TTLDecoratorTest(unittest.TestCase, DecoratorTestMixin):
DECORATOR = staticmethod(cachetools.func.ttl_cache)

View File

@@ -0,0 +1,92 @@
import unittest
import cachetools.keys
class CacheKeysTest(unittest.TestCase):
def test_hashkey(self, key=cachetools.keys.hashkey):
self.assertEqual(key(), key())
self.assertEqual(hash(key()), hash(key()))
self.assertEqual(key(1, 2, 3), key(1, 2, 3))
self.assertEqual(hash(key(1, 2, 3)), hash(key(1, 2, 3)))
self.assertEqual(key(1, 2, 3, x=0), key(1, 2, 3, x=0))
self.assertEqual(hash(key(1, 2, 3, x=0)), hash(key(1, 2, 3, x=0)))
self.assertNotEqual(key(1, 2, 3), key(3, 2, 1))
self.assertNotEqual(key(1, 2, 3), key(1, 2, 3, x=None))
self.assertNotEqual(key(1, 2, 3, x=0), key(1, 2, 3, x=None))
self.assertNotEqual(key(1, 2, 3, x=0), key(1, 2, 3, y=0))
with self.assertRaises(TypeError):
hash(key({}))
# untyped keys compare equal
self.assertEqual(key(1, 2, 3), key(1.0, 2.0, 3.0))
self.assertEqual(hash(key(1, 2, 3)), hash(key(1.0, 2.0, 3.0)))
    def test_methodkey(self, key=cachetools.keys.methodkey):
# similar to hashkey(), but ignores its first positional argument
self.assertEqual(key("x"), key("y"))
self.assertEqual(hash(key("x")), hash(key("y")))
self.assertEqual(key("x", 1, 2, 3), key("y", 1, 2, 3))
self.assertEqual(hash(key("x", 1, 2, 3)), hash(key("y", 1, 2, 3)))
self.assertEqual(key("x", 1, 2, 3, x=0), key("y", 1, 2, 3, x=0))
self.assertEqual(hash(key("x", 1, 2, 3, x=0)), hash(key("y", 1, 2, 3, x=0)))
self.assertNotEqual(key("x", 1, 2, 3), key("x", 3, 2, 1))
self.assertNotEqual(key("x", 1, 2, 3), key("x", 1, 2, 3, x=None))
self.assertNotEqual(key("x", 1, 2, 3, x=0), key("x", 1, 2, 3, x=None))
self.assertNotEqual(key("x", 1, 2, 3, x=0), key("x", 1, 2, 3, y=0))
with self.assertRaises(TypeError):
hash("x", key({}))
# untyped keys compare equal
self.assertEqual(key("x", 1, 2, 3), key("y", 1.0, 2.0, 3.0))
self.assertEqual(hash(key("x", 1, 2, 3)), hash(key("y", 1.0, 2.0, 3.0)))
def test_typedkey(self, key=cachetools.keys.typedkey):
self.assertEqual(key(), key())
self.assertEqual(hash(key()), hash(key()))
self.assertEqual(key(1, 2, 3), key(1, 2, 3))
self.assertEqual(hash(key(1, 2, 3)), hash(key(1, 2, 3)))
self.assertEqual(key(1, 2, 3, x=0), key(1, 2, 3, x=0))
self.assertEqual(hash(key(1, 2, 3, x=0)), hash(key(1, 2, 3, x=0)))
self.assertNotEqual(key(1, 2, 3), key(3, 2, 1))
self.assertNotEqual(key(1, 2, 3), key(1, 2, 3, x=None))
self.assertNotEqual(key(1, 2, 3, x=0), key(1, 2, 3, x=None))
self.assertNotEqual(key(1, 2, 3, x=0), key(1, 2, 3, y=0))
with self.assertRaises(TypeError):
hash(key({}))
# typed keys compare unequal
self.assertNotEqual(key(1, 2, 3), key(1.0, 2.0, 3.0))
def test_typedmethodkey(self, key=cachetools.keys.typedmethodkey):
# similar to typedkey(), but ignores its first positional argument
self.assertEqual(key("x"), key("y"))
self.assertEqual(hash(key("x")), hash(key("y")))
self.assertEqual(key("x", 1, 2, 3), key("y", 1, 2, 3))
self.assertEqual(hash(key("x", 1, 2, 3)), hash(key("y", 1, 2, 3)))
self.assertEqual(key("x", 1, 2, 3, x=0), key("y", 1, 2, 3, x=0))
self.assertEqual(hash(key("x", 1, 2, 3, x=0)), hash(key("y", 1, 2, 3, x=0)))
self.assertNotEqual(key("x", 1, 2, 3), key("x", 3, 2, 1))
self.assertNotEqual(key("x", 1, 2, 3), key("x", 1, 2, 3, x=None))
self.assertNotEqual(key("x", 1, 2, 3, x=0), key("x", 1, 2, 3, x=None))
self.assertNotEqual(key("x", 1, 2, 3, x=0), key("x", 1, 2, 3, y=0))
with self.assertRaises(TypeError):
hash(key("x", {}))
# typed keys compare unequal
self.assertNotEqual(key("x", 1, 2, 3), key("x", 1.0, 2.0, 3.0))
def test_addkeys(self, key=cachetools.keys.hashkey):
self.assertIsInstance(key(), tuple)
self.assertIsInstance(key(1, 2, 3) + key(4, 5, 6), type(key()))
self.assertIsInstance(key(1, 2, 3) + (4, 5, 6), type(key()))
self.assertIsInstance((1, 2, 3) + key(4, 5, 6), type(key()))
def test_pickle(self, key=cachetools.keys.hashkey):
import pickle
for k in [key(), key("abc"), key("abc", 123), key("abc", q="abc")]:
# white-box test: assert cached hash value is not pickled
self.assertEqual(len(k.__dict__), 0)
h = hash(k)
self.assertEqual(len(k.__dict__), 1)
pickled = pickle.loads(pickle.dumps(k))
self.assertEqual(len(pickled.__dict__), 0)
self.assertEqual(k, pickled)
self.assertEqual(h, hash(pickled))

View File

@@ -0,0 +1,49 @@
import unittest
from cachetools import LFUCache
from . import CacheTestMixin
class LFUCacheTest(unittest.TestCase, CacheTestMixin):
Cache = LFUCache
def test_lfu(self):
cache = LFUCache(maxsize=2)
cache[1] = 1
        cache[1]  # an extra access makes 1 the most frequently used item
cache[2] = 2
cache[3] = 3
self.assertEqual(len(cache), 2)
self.assertEqual(cache[1], 1)
self.assertTrue(2 in cache or 3 in cache)
self.assertTrue(2 not in cache or 3 not in cache)
cache[4] = 4
self.assertEqual(len(cache), 2)
self.assertEqual(cache[4], 4)
self.assertEqual(cache[1], 1)
def test_lfu_getsizeof(self):
cache = LFUCache(maxsize=3, getsizeof=lambda x: x)
cache[1] = 1
cache[2] = 2
self.assertEqual(len(cache), 2)
self.assertEqual(cache[1], 1)
self.assertEqual(cache[2], 2)
cache[3] = 3
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
with self.assertRaises(ValueError):
cache[4] = 4
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)

View File

@@ -0,0 +1,56 @@
import unittest
from cachetools import LRUCache
from . import CacheTestMixin
class LRUCacheTest(unittest.TestCase, CacheTestMixin):
Cache = LRUCache
def test_lru(self):
cache = LRUCache(maxsize=2)
cache[1] = 1
cache[2] = 2
cache[3] = 3
self.assertEqual(len(cache), 2)
self.assertEqual(cache[2], 2)
self.assertEqual(cache[3], 3)
self.assertNotIn(1, cache)
        cache[2]  # touch 2 so that 3 becomes the least recently used item
cache[4] = 4
self.assertEqual(len(cache), 2)
self.assertEqual(cache[2], 2)
self.assertEqual(cache[4], 4)
self.assertNotIn(3, cache)
cache[5] = 5
self.assertEqual(len(cache), 2)
self.assertEqual(cache[4], 4)
self.assertEqual(cache[5], 5)
self.assertNotIn(2, cache)
def test_lru_getsizeof(self):
cache = LRUCache(maxsize=3, getsizeof=lambda x: x)
cache[1] = 1
cache[2] = 2
self.assertEqual(len(cache), 2)
self.assertEqual(cache[1], 1)
self.assertEqual(cache[2], 2)
cache[3] = 3
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
with self.assertRaises(ValueError):
cache[4] = 4
self.assertEqual(len(cache), 1)
self.assertEqual(cache[3], 3)

View File

@@ -0,0 +1,63 @@
import unittest
import warnings
from cachetools import MRUCache
from . import CacheTestMixin
class MRUCacheTest(unittest.TestCase, CacheTestMixin):
# TODO: method to create cache that can be overridden
Cache = MRUCache
def test_evict__writes_only(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
cache = MRUCache(maxsize=2)
self.assertEqual(len(w), 1)
self.assertIs(w[0].category, DeprecationWarning)
cache[1] = 1
cache[2] = 2
cache[3] = 3 # Evicts 1 because nothing's been used yet
assert len(cache) == 2
assert 1 not in cache, "Wrong key was evicted. Should have been '1'."
assert 2 in cache
assert 3 in cache
def test_evict__with_access(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
cache = MRUCache(maxsize=2)
self.assertEqual(len(w), 1)
self.assertIs(w[0].category, DeprecationWarning)
cache[1] = 1
cache[2] = 2
cache[1]
cache[2]
cache[3] = 3 # Evicts 2
assert 2 not in cache, "Wrong key was evicted. Should have been '2'."
assert 1 in cache
assert 3 in cache
def test_evict__with_delete(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
cache = MRUCache(maxsize=2)
self.assertEqual(len(w), 1)
self.assertIs(w[0].category, DeprecationWarning)
cache[1] = 1
cache[2] = 2
del cache[2]
cache[3] = 3 # Doesn't evict anything because we just deleted 2
assert 2 not in cache
assert 1 in cache
cache[4] = 4 # Should evict 1 as we just accessed it with __contains__
assert 1 not in cache
assert 3 in cache
assert 4 in cache

View File

@@ -0,0 +1,34 @@
import unittest
from cachetools import RRCache
from . import CacheTestMixin
class RRCacheTest(unittest.TestCase, CacheTestMixin):
Cache = RRCache
def test_rr(self):
cache = RRCache(maxsize=2, choice=min)
self.assertEqual(min, cache.choice)
cache[1] = 1
cache[2] = 2
cache[3] = 3
self.assertEqual(2, len(cache))
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
self.assertNotIn(1, cache)
cache[0] = 0
self.assertEqual(2, len(cache))
self.assertEqual(0, cache[0])
self.assertEqual(3, cache[3])
self.assertNotIn(2, cache)
cache[4] = 4
self.assertEqual(2, len(cache))
self.assertEqual(3, cache[3])
self.assertEqual(4, cache[4])
self.assertNotIn(0, cache)

View File

@@ -0,0 +1,271 @@
import math
import unittest
from cachetools import TLRUCache
from . import CacheTestMixin
class Timer:
def __init__(self, auto=False):
self.auto = auto
self.time = 0
def __call__(self):
if self.auto:
self.time += 1
return self.time
def tick(self):
self.time += 1
class TLRUTestCache(TLRUCache):
def default_ttu(_key, _value, _time):
return math.inf
def __init__(self, maxsize, ttu=default_ttu, **kwargs):
TLRUCache.__init__(self, maxsize, ttu, timer=Timer(), **kwargs)
class TLRUCacheTest(unittest.TestCase, CacheTestMixin):
Cache = TLRUTestCache
def test_ttu(self):
cache = TLRUCache(maxsize=6, ttu=lambda _, v, t: t + v + 1, timer=Timer())
self.assertEqual(0, cache.timer())
self.assertEqual(3, cache.ttu(None, 1, 1))
cache[1] = 1
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual({1}, set(cache))
cache.timer.tick()
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual({1}, set(cache))
cache[2] = 2
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(2, len(cache))
self.assertEqual({1, 2}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(1, len(cache))
self.assertEqual({2}, set(cache))
cache[3] = 3
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
self.assertEqual(2, len(cache))
self.assertEqual({2, 3}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
self.assertEqual(2, len(cache))
self.assertEqual({2, 3}, set(cache))
cache[1] = 1
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
self.assertEqual(3, len(cache))
self.assertEqual({1, 2, 3}, set(cache))
cache.timer.tick()
self.assertEqual(1, cache[1])
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
self.assertEqual(2, len(cache))
self.assertEqual({1, 3}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
self.assertEqual(1, len(cache))
self.assertEqual({3}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
with self.assertRaises(KeyError):
del cache[1]
with self.assertRaises(KeyError):
cache.pop(2)
with self.assertRaises(KeyError):
del cache[3]
self.assertEqual(0, len(cache))
self.assertEqual(set(), set(cache))
def test_ttu_lru(self):
cache = TLRUCache(maxsize=2, ttu=lambda k, v, t: t + 1, timer=Timer())
self.assertEqual(0, cache.timer())
self.assertEqual(2, cache.ttu(None, None, 1))
cache[1] = 1
cache[2] = 2
cache[3] = 3
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertEqual(cache[2], 2)
self.assertEqual(cache[3], 3)
        cache[2]  # touch 2 so that 3 becomes the least recently used item
cache[4] = 4
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertEqual(cache[2], 2)
self.assertNotIn(3, cache)
self.assertEqual(cache[4], 4)
cache[5] = 5
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
self.assertEqual(cache[4], 4)
self.assertEqual(cache[5], 5)
def test_ttu_expire(self):
cache = TLRUCache(maxsize=3, ttu=lambda k, v, t: t + 3, timer=Timer())
with cache.timer as time:
self.assertEqual(time, cache.timer())
cache[1] = 1
cache.timer.tick()
cache[2] = 2
cache.timer.tick()
cache[3] = 3
self.assertEqual(2, cache.timer())
self.assertEqual({1, 2, 3}, set(cache))
self.assertEqual(3, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire()
self.assertEqual(set(), set(items))
self.assertEqual({1, 2, 3}, set(cache))
self.assertEqual(3, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire(3)
self.assertEqual({(1, 1)}, set(items))
self.assertEqual({2, 3}, set(cache))
self.assertEqual(2, len(cache))
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire(4)
self.assertEqual({(2, 2)}, set(items))
self.assertEqual({3}, set(cache))
self.assertEqual(1, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
items = cache.expire(5)
self.assertEqual({(3, 3)}, set(items))
self.assertEqual(set(), set(cache))
self.assertEqual(0, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
def test_ttu_expired(self):
cache = TLRUCache(maxsize=1, ttu=lambda k, _, t: t + k, timer=Timer())
cache[1] = None
self.assertEqual(cache[1], None)
self.assertEqual(1, len(cache))
cache[0] = None
self.assertNotIn(0, cache)
self.assertEqual(cache[1], None)
self.assertEqual(1, len(cache))
cache[-1] = None
self.assertNotIn(-1, cache)
self.assertNotIn(0, cache)
self.assertEqual(cache[1], None)
self.assertEqual(1, len(cache))
def test_ttu_atomic(self):
cache = TLRUCache(maxsize=1, ttu=lambda k, v, t: t + 2, timer=Timer(auto=True))
cache[1] = 1
self.assertEqual(1, cache[1])
cache[1] = 1
self.assertEqual(1, cache.get(1))
cache[1] = 1
self.assertEqual(1, cache.pop(1))
cache[1] = 1
self.assertEqual(1, cache.setdefault(1))
cache[1] = 1
cache.clear()
self.assertEqual(0, len(cache))
def test_ttu_tuple_key(self):
cache = TLRUCache(maxsize=1, ttu=lambda k, v, t: t + 1, timer=Timer())
cache[(1, 2, 3)] = 42
self.assertEqual(42, cache[(1, 2, 3)])
cache.timer.tick()
with self.assertRaises(KeyError):
cache[(1, 2, 3)]
self.assertNotIn((1, 2, 3), cache)
def test_ttu_reverse_insert(self):
cache = TLRUCache(maxsize=4, ttu=lambda k, v, t: t + v, timer=Timer())
self.assertEqual(0, cache.timer())
cache[3] = 3
cache[2] = 2
cache[1] = 1
cache[0] = 0
self.assertEqual({1, 2, 3}, set(cache))
self.assertEqual(3, len(cache))
self.assertNotIn(0, cache)
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
cache.timer.tick()
self.assertEqual({2, 3}, set(cache))
self.assertEqual(2, len(cache))
self.assertNotIn(0, cache)
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
cache.timer.tick()
self.assertEqual({3}, set(cache))
self.assertEqual(1, len(cache))
self.assertNotIn(0, cache)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
cache.timer.tick()
self.assertEqual(set(), set(cache))
self.assertEqual(0, len(cache))
self.assertNotIn(0, cache)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)

View File

@@ -0,0 +1,203 @@
import math
import unittest
from cachetools import TTLCache
from . import CacheTestMixin
class Timer:
def __init__(self, auto=False):
self.auto = auto
self.time = 0
def __call__(self):
if self.auto:
self.time += 1
return self.time
def tick(self):
self.time += 1
class TTLTestCache(TTLCache):
def __init__(self, maxsize, ttl=math.inf, **kwargs):
TTLCache.__init__(self, maxsize, ttl=ttl, timer=Timer(), **kwargs)
class TTLCacheTest(unittest.TestCase, CacheTestMixin):
Cache = TTLTestCache
def test_ttl(self):
cache = TTLCache(maxsize=2, ttl=2, timer=Timer())
self.assertEqual(0, cache.timer())
self.assertEqual(2, cache.ttl)
cache[1] = 1
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual({1}, set(cache))
cache.timer.tick()
self.assertEqual(1, cache[1])
self.assertEqual(1, len(cache))
self.assertEqual({1}, set(cache))
cache[2] = 2
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(2, len(cache))
self.assertEqual({1, 2}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(1, len(cache))
self.assertEqual({2}, set(cache))
cache[3] = 3
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
self.assertEqual(2, len(cache))
self.assertEqual({2, 3}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
self.assertEqual(1, len(cache))
self.assertEqual({3}, set(cache))
cache.timer.tick()
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
with self.assertRaises(KeyError):
del cache[1]
with self.assertRaises(KeyError):
cache.pop(2)
with self.assertRaises(KeyError):
del cache[3]
self.assertEqual(0, len(cache))
self.assertEqual(set(), set(cache))
def test_ttl_lru(self):
cache = TTLCache(maxsize=2, ttl=1, timer=Timer())
cache[1] = 1
cache[2] = 2
cache[3] = 3
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertEqual(cache[2], 2)
self.assertEqual(cache[3], 3)
        cache[2]  # touch 2 so that 3 becomes the least recently used item
cache[4] = 4
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertEqual(cache[2], 2)
self.assertNotIn(3, cache)
self.assertEqual(cache[4], 4)
cache[5] = 5
self.assertEqual(len(cache), 2)
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
self.assertEqual(cache[4], 4)
self.assertEqual(cache[5], 5)
def test_ttl_expire(self):
cache = TTLCache(maxsize=3, ttl=3, timer=Timer())
with cache.timer as time:
self.assertEqual(time, cache.timer())
self.assertEqual(3, cache.ttl)
cache[1] = 1
cache.timer.tick()
cache[2] = 2
cache.timer.tick()
cache[3] = 3
self.assertEqual(2, cache.timer())
self.assertEqual({1, 2, 3}, set(cache))
self.assertEqual(3, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire()
self.assertEqual(set(), set(items))
self.assertEqual({1, 2, 3}, set(cache))
self.assertEqual(3, len(cache))
self.assertEqual(1, cache[1])
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire(3)
self.assertEqual({(1, 1)}, set(items))
self.assertEqual({2, 3}, set(cache))
self.assertEqual(2, len(cache))
self.assertNotIn(1, cache)
self.assertEqual(2, cache[2])
self.assertEqual(3, cache[3])
items = cache.expire(4)
self.assertEqual({(2, 2)}, set(items))
self.assertEqual({3}, set(cache))
self.assertEqual(1, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertEqual(3, cache[3])
items = cache.expire(5)
self.assertEqual({(3, 3)}, set(items))
self.assertEqual(set(), set(cache))
self.assertEqual(0, len(cache))
self.assertNotIn(1, cache)
self.assertNotIn(2, cache)
self.assertNotIn(3, cache)
def test_ttl_atomic(self):
cache = TTLCache(maxsize=1, ttl=2, timer=Timer(auto=True))
cache[1] = 1
self.assertEqual(1, cache[1])
cache[1] = 1
self.assertEqual(1, cache.get(1))
cache[1] = 1
self.assertEqual(1, cache.pop(1))
cache[1] = 1
self.assertEqual(1, cache.setdefault(1))
cache[1] = 1
cache.clear()
self.assertEqual(0, len(cache))
def test_ttl_tuple_key(self):
cache = TTLCache(maxsize=1, ttl=1, timer=Timer())
self.assertEqual(1, cache.ttl)
cache[(1, 2, 3)] = 42
self.assertEqual(42, cache[(1, 2, 3)])
cache.timer.tick()
with self.assertRaises(KeyError):
cache[(1, 2, 3)]
self.assertNotIn((1, 2, 3), cache)
def test_ttl_datetime(self):
from datetime import datetime, timedelta
cache = TTLCache(maxsize=1, ttl=timedelta(days=1), timer=datetime.now)
cache[1] = 1
self.assertEqual(1, len(cache))
items = cache.expire(datetime.now())
self.assertEqual([], list(items))
self.assertEqual(1, len(cache))
items = cache.expire(datetime.now() + timedelta(days=1))
self.assertEqual([(1, 1)], list(items))
self.assertEqual(0, len(cache))

View File

@@ -0,0 +1,40 @@
[tox]
envlist = check-manifest,docs,doctest,flake8,py
[testenv]
deps =
pytest
pytest-cov
commands =
py.test --basetemp={envtmpdir} --cov=cachetools {posargs}
[testenv:check-manifest]
deps =
check-manifest==0.44; python_version < "3.8"
check-manifest; python_version >= "3.8"
commands =
check-manifest
skip_install = true
[testenv:docs]
deps =
sphinx
commands =
sphinx-build -W -b html -d {envtmpdir}/doctrees docs {envtmpdir}/html
[testenv:doctest]
deps =
sphinx
commands =
sphinx-build -W -b doctest -d {envtmpdir}/doctrees docs {envtmpdir}/doctest
[testenv:flake8]
deps =
flake8
flake8-black; implementation_name == "cpython"
black==22.12.0; implementation_name == "cpython" and python_version < "3.8"
flake8-bugbear
flake8-import-order
commands =
flake8
skip_install = true