This is a universal Python binding for the LMDB 'Lightning' Database. Two variants are provided and automatically selected during install: a CFFI variant that supports PyPy and all versions of CPython >=2.7, and a C extension that supports CPython >=2.7 and >=3.4. Both variants provide the same interface.
LMDB is a tiny database with some excellent properties:
- Ordered map interface (keys are always lexicographically sorted).
- Reader/writer transactions: readers don't block writers, writers don't block readers. Each environment supports one concurrent write transaction.
- Read transactions are extremely cheap.
- Environments may be opened by multiple processes on the same host, making it ideal for working around Python's GIL.
- Multiple named databases may be created with transactions covering all named databases.
- Memory mapped, allowing for zero copy lookup and iteration. This is optionally exposed to Python using the buffer() interface.
- Maintenance requires no external process or background threads.
- No application-level caching is required: LMDB fully exploits the operating system's buffer cache.
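For instance, a minimal sketch of the ordered-map interface (the environment path '/tmp/example-env' is an arbitrary placeholder):

    import lmdb

    # Open (and create, if necessary) an environment at a throwaway path.
    env = lmdb.open('/tmp/example-env')

    # Writes happen inside a write transaction.
    with env.begin(write=True) as txn:
        txn.put(b'banana', b'yellow')
        txn.put(b'apple', b'green')

    # Reads happen inside a (very cheap) read transaction; iteration is in
    # lexicographic key order: b'apple' first, then b'banana'.
    with env.begin() as txn:
        for key, value in txn.cursor():
            print(key, value)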
Installation: Windows¶
Binary eggs and wheels are published via PyPI for Windows, allowing the binding to be installed via pip and easy_install without the need for a compiler to be present. The binary releases statically link against the bundled version of LMDB.
Initially, 32-bit and 64-bit binaries are provided for Python 2.7; in future, binaries will be published for all supported versions of Python.
To install, use a command like:
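    pip install lmdb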
Installation: UNIX¶
For convenience, a supported version of LMDB is bundled with the binding and built statically by default. If your system distribution includes LMDB, set the LMDB_FORCE_SYSTEM environment variable, and optionally LMDB_INCLUDEDIR and LMDB_LIBDIR, prior to invoking setup.py.
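For example (the include and library paths below are placeholders for wherever your distribution installs LMDB):

    LMDB_FORCE_SYSTEM=1 \
    LMDB_INCLUDEDIR=/usr/include \
    LMDB_LIBDIR=/usr/lib \
    python setup.py install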
By default, the LMDB library is patched before building. This patch (located at lib/py-lmdb/env-copy-txn.patch) provides a minor feature: the ability to copy/backup an environment under a particular transaction. If you prefer to bypass the patch, set the environment variable LMDB_PURE.
The CFFI variant depends on CFFI, which in turn depends on libffi , which may need to be installed from a package. On CPython, both variants additionally depend on the CPython development headers. On Debian/Ubuntu:
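    # Typical package names on Debian/Ubuntu (use python3-dev for Python 3):
    apt-get install libffi-dev python-dev build-essential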
To install the C extension, ensure a C compiler and pip or easy_install are available and type:
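    pip install lmdb
    # or, using setuptools:
    easy_install lmdb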
The CFFI variant may be used on CPython by setting the LMDB_FORCE_CFFI environment variable before installation, or before module import with an existing installation:
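    # Setting the variable before the first import of lmdb in the process
    # selects the CFFI implementation.
    import os
    os.environ['LMDB_FORCE_CFFI'] = '1'

    import lmdb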
Getting Help¶
Before getting in contact, please ensure you have thoroughly reviewed this documentation, and if applicable, the associated official Doxygen documentation.
If you have found a bug, please report it on the GitHub issue tracker, or mail it to the list below if you're allergic to GitHub.
For all other problems and related discussion, please direct them to the py-lmdb mailing list. You must be subscribed to post. The list archives are also available.
Named Databases¶
Named databases require the max_dbs= parameter to be provided when calling lmdb.open() or lmdb.Environment. This must be done by the first process or thread opening the environment.
Once a correctly configured Environment is created, new named databases may be created via Environment.open_db().
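A minimal sketch (the path '/tmp/example-env' and database name b'meta' are arbitrary placeholders):

    import lmdb

    # max_dbs reserves slots for named databases and must be given by the
    # first process or thread that opens the environment.
    env = lmdb.open('/tmp/example-env', max_dbs=10)

    # Create (or open, if it already exists) a named database.
    meta_db = env.open_db(b'meta')

    # Transactions may cover any of the named databases.
    with env.begin(write=True, db=meta_db) as txn:
        txn.put(b'version', b'1')
        print(txn.get(b'version'))   # b'1'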
Storage efficiency & limits¶
Records are grouped into pages matching the operating system's VM page size, which is usually 4096 bytes. Each page must contain at least 2 records, in addition to 8 bytes per record and a 16 byte header. Due to this, the engine is most space-efficient when the combined size of any (8+key+value) combination does not exceed 2040 bytes, i.e. (4096 - 16) / 2.
When an attempt to store a record would exceed the maximum size, its value part is written separately to one or more dedicated pages. Since the trailer of the last page containing the record value cannot be shared with other records, it is more efficient when large values are an approximate multiple of 4096 bytes, minus 16 bytes for an initial header.
Space usage can be monitored using Environment.stat():
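A minimal sketch, assuming env is an already-open Environment:

    from pprint import pprint

    pprint(env.stat())
    # The result is a dict with keys including 'psize', 'depth', 'branch_pages',
    # 'leaf_pages', 'overflow_pages' and 'entries'; 'entries' is the record
    # count and 'overflow_pages' counts values spilled to dedicated pages.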
This database contains 3,761,848 records and no values were spilled (overflow_pages is 0). Environment.stat() only returns information for the default database. If named databases are used, you must add the results from Transaction.stat() on each named database.
By default record keys are limited to 511 bytes in length; however, this can be adjusted by rebuilding the library. The compile-time key length can be queried via Environment.max_key_size().
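For example, assuming env is an already-open Environment:

    print(env.max_key_size())   # 511 for a default build of the bundled library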
Memory usage¶
Diagnostic tools often overreport the memory usage of LMDB databases, since the tools poorly classify that memory. The Linux ps command's RSS measurement may report a process as having an entire database resident, causing user alarm. While the entire database may really be resident, that is only half the story.
Unlike heap memory, pages in file-backed memory maps, such as those used by LMDB, may be efficiently reclaimed by the OS at any moment so long as the pages in the map are clean. Clean simply means that the resident pages' contents match the associated pages that live in the disk file that backs the mapping. A clean mapping works exactly like a cache, and in fact it is a cache: the OS page cache.
On Linux, the /proc/<pid>/smaps file contains one section for each memory mapping in a process. To inspect the actual memory usage of an LMDB database, look for a data.mdb entry, then observe its Dirty and Clean values.
When no write transaction is active, all pages in an LMDB database should be marked clean, unless the Environment was opened with sync=False, and no explicit Environment.sync() has been called since the last write transaction, and the OS writeback mechanism has not yet opportunistically written the dirty pages to disk.
Bytestrings¶
This documentation uses bytestring to mean either the Python<=2.7 str() type, or the Python>=3.0 bytes() type, depending on the Python version in use.
Due to the design of Python 2.x, LMDB will happily accept Unicode instances where str() instances are expected, so long as they contain only ASCII characters, in which case they are implicitly encoded to ASCII. You should not rely on this behaviour! It results in brittle programs that often break the moment they are deployed in production. Always explicitly encode and decode any Unicode values before passing them to LMDB.
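For example, a minimal sketch assuming txn is an open write transaction:

    # Encode explicitly before storing, decode explicitly after reading.
    key = u'user/name'.encode('utf-8')
    txn.put(key, u'example'.encode('utf-8'))
    value = txn.get(key).decode('utf-8')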
This documentation uses bytes() in examples. In Python 3.x this is a distinct type, whereas in Python 2.7 it is simply an alias for str().
Buffers¶
Since LMDB is memory mapped it is possible to access record data without keys or values ever being copied by the kernel, database library, or application. To exploit this the library can be instructed to return buffer() objects instead of bytestrings by passing buffers=True to Environment.begin() or Transaction.
In Python buffer() objects can be used in many places where bytestrings are expected. In every way they act like a regular sequence: they support slicing, indexing, iteration, and taking their length. Many Python APIs will automatically convert them to bytestrings as necessary:
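A minimal sketch, assuming env is an open Environment that already contains the key b'somekey':

    with env.begin(buffers=True) as txn:
        buf = txn.get(b'somekey')   # a buffer/memoryview, not a bytestring
        n = len(buf)                # length works as usual
        head = buf[:16]             # so does slicing
        copy = bytes(buf)           # explicit copy into a real bytestring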
It is also possible to pass buffers directly to many native APIs, for example file.write(), socket.send(), zlib.decompress() and so on. A buffer may be sliced without copying by passing it to buffer():
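For example, continuing with buf from the previous sketch (buffer() exists only on Python 2; a Python 3 equivalent is shown in the comment):

    # Extract bytes 10 through 210 of the record without copying:
    sub_buf = buffer(buf, 10, 200)

    # Python 3 equivalent, also zero-copy:
    # sub_buf = memoryview(buf)[10:210]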
In both PyPy and CPython, returned buffers must be discarded after their producing transaction has completed or been modified in any way.